The application of crowdsourcing approaches to cancer research: a systematic review


Paper by Young Ji Lee, Janet A. Arida, and Heidi S. Donovan at Cancer Medicine: “Crowdsourcing is ‘the practice of obtaining participants, services, ideas, or content by soliciting contributions from a large group of people, especially via the Internet’ (Ranard et al. J. Gen. Intern. Med. 29:187, 2014). Although crowdsourcing has been adopted in healthcare research and its potential for analyzing large datasets and obtaining rapid feedback has recently been recognized, no systematic reviews of crowdsourcing in cancer research have been conducted. Therefore, we sought to identify applications of and explore potential uses for crowdsourcing in cancer research. We conducted a systematic review of articles published between January 2005 and June 2016 on crowdsourcing in cancer research, using PubMed, CINAHL, Scopus, PsycINFO, and Embase. Data from the 12 identified articles were summarized but not combined statistically. The studies addressed a range of cancers (e.g., breast, skin, gynecologic, colorectal, prostate). Eleven studies collected data on the Internet using web-based platforms; one recruited participants in a shopping mall using paper-and-pen data collection. Four studies used Amazon Mechanical Turk for recruiting and/or data collection. Study objectives comprised categorizing biopsy images (n = 6), assessing cancer knowledge (n = 3), refining a decision support system (n = 1), standardizing survivorship care-planning (n = 1), and designing a clinical trial (n = 1). Although one study demonstrated that “the wisdom of the crowd” could not replace trained experts, five studies suggest that distributed human intelligence could approximate or support the work of trained experts. Despite limitations, crowdsourcing has the potential to improve the quality and speed of research while reducing costs. Longitudinal studies should confirm and refine these findings….(More)”

Civic Creativity: Role-Playing Games in Deliberative Process


Eric Gordon, Jason Haas, and Becky Michelson at the International Journal of Communication: “This article analyzes the use of a role-playing game in a civic planning process. We focus on the qualities of interactions generated through gameplay, specifically the affordances of voluntary play within a “magic circle” of the game, that directly impact participants’ ability to generate new ideas about the community. We present the results of a quasi-experimental study where a role-playing game (RPG) called @Stake is incorporated into participatory budgeting meetings in New York City and compared with meetings that incorporated a trivia game. We provide evidence that the role-playing game, which encourages empathy, is more effective than a knowledge-testing game at generating what we call civic creativity: an individual’s ability to come up with new ideas. Rapid ideation and social learning nurtured by the game point to a kind of group creativity that fosters social connection and understanding of consequence outside of the game. We conclude with thoughts on future research….(More)”.

Let’s create a nation of social scientists


Geoff Mulgan in Times Higher Education: “How might social science become more influential, more relevant and more useful in the years to come?

Recent debates about impact have largely assumed a model of social science in which a cadre of specialists, based in universities, analyse and interpret the world and then feed conclusions into an essentially passive society. But a very different view sees specialists in the academy working much more in partnership with a society that is itself skilled in social science, able to generate hypotheses, gather data, experiment and draw conclusions that might help to answer the big questions of our time, from the sources of inequality to social trust, identity to violence.

There are some powerful trends to suggest that this second view is gaining traction. The first of these is the extraordinary explosion of new ways to observe social phenomena. Every day each of us leaves behind a data trail of who we talk to, what we eat and where we go. It’s easier than ever to survey people, to spot patterns, to scrape the web or to pick up data from sensors. It’s easier than ever to gather perceptions and emotions as well as material facts, and easier than ever for organisations to practise social science – whether investment organisations analysing market patterns, human resources departments using behavioural science, or local authorities using ethnography.

That deluge of data is a big enough shift on its own. However, it is also now being used to feed interpretive and predictive tools that use artificial intelligence to predict who is most likely to go to hospital or end up in prison, and which relationships are most likely to end in divorce.

Governments are developing their own predictive tools, and have also become much more interested in systematic experimentation, with Finland and Canada in the lead, moving us closer to Karl Popper’s vision of “methods of trial and error, of inventing hypotheses which can be practically tested”…

The second revolution is less visible but could be no less profound. This is the hunger of many people to be creators of knowledge, not just users; to be part of a truly collective intelligence. At the moment this shift towards mass engagement in knowledge is most visible in neighbouring fields. Digital humanities mobilise many volunteers to input data and interpret texts – for example making ancient Arabic texts machine-readable. Even more striking is the growth of citizen science – eBird had 1.5 million reports last January; some 1.5 million people in the US monitor rivers, streams and lakes; and SETI@home has 5 million volunteers. Thousands of patients also take part in funding and shaping research on their own conditions….

We’re all familiar with the old idea that it’s better to teach a man to fish than just to give him fish. In essence these trends ask us a simple question: why not apply the same logic to social science, and why not reorient social sciences to enhance the capacity of society itself to observe, analyse and interpret?…(More)”.

BBC Four to investigate how flu pandemic spreads by launching BBC Pandemic app


BBC Press Release: “In a first of its kind nationwide citizen science experiment, Dr Hannah Fry is asking volunteers to download the BBC Pandemic App onto their smartphones. The free app will anonymously collect vital data on how far users travel over a 24-hour period. Users will be asked for information about the number of people they have come into contact with during this time. This data will be used to simulate the spread of a highly infectious disease to see what might happen when – not if – a real pandemic hits the UK.

By partnering with researchers at the University of Cambridge and the London School of Hygiene and Tropical Medicine, the BBC Pandemic app will identify the human networks and behaviours that spread infectious disease. The data collated from the app will help improve public health planning and outbreak control.
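The modelling side of the experiment can be illustrated with a toy example. Below is a minimal SIR (Susceptible–Infected–Recovered) sketch of the kind of compartmental epidemic model that mobility and contact data like the app’s help calibrate. The parameter values are illustrative assumptions for the sketch, not figures from the BBC project.

```python
# A minimal discrete-time SIR model. beta (transmission rate) and gamma
# (recovery rate) are the quantities real contact-survey data would inform;
# the numbers below are assumptions chosen only for illustration.

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Return a list of (susceptible, infected, recovered) states per day."""
    s = population - initial_infected
    i = float(initial_infected)
    r = 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Illustrative run: a UK-sized population, R0 = beta/gamma = 2.
history = simulate_sir(population=66_000_000, initial_infected=100,
                       beta=0.4, gamma=0.2, days=365)
peak_day = max(range(len(history)), key=lambda d: history[d][1])
print(f"Peak infections around day {peak_day}")
```

Better estimates of how far people travel and how many contacts they have sharpen `beta`, which is exactly why each download improves the model, as Dr Fry notes below.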

The results of the experiment will be revealed in a 90-minute landmark documentary, BBC Pandemic, which will air in spring 2018 on BBC Four with Dr Hannah Fry and Dr Javid Abdelmoneim. The pair will chart the creation of the first ever life-saving pandemic experiment, provide new insight into the latest pandemic science and use the data collected by the BBC Pandemic app to chart how an outbreak would spread across the UK.

In the last 100 years there have been four major flu pandemics, including the Spanish Influenza outbreak of 1918 that killed up to 100 million people worldwide. Since 2015, the Government’s National Risk Register has rated infectious diseases as an even greater risk, with pandemic flu the key concern: up to 50% of the population could be affected.

“Nobody knows when the next epidemic will hit, how far it will spread, or how many people will be affected. And yet, because of the power of mathematics, we can still be prepared for whatever lies ahead. What’s really important is that every single download will help improve our models, so please, please do take part – it will make a difference,” explains Dr Fry.

Dr Abdelmoneim says: “We shouldn’t underestimate the flu virus. It could easily be the cause of a major pandemic that could sweep around the world in a matter of weeks. I’m really excited about the BBC Pandemic app. If it can help predict the spread of a disease and be used to work out ways to slow that spread, it will be much easier for society and our healthcare system to manage”.

Cassian Harrison, Editor BBC Four says: “This is a bold and tremendously exciting project; bringing genuine insight and discovery, and taking BBC Four’s Experimental brief absolutely literally!”…(More)”

Crowdsourcing Accountability: ICT for Service Delivery


Paper by Guy Grossman, Melina Platas and Jonathan Rodden: “We examine the effect on service delivery outcomes of a new information communication technology (ICT) platform that allows citizens to send free and anonymous messages to local government officials, thus reducing the cost and increasing the efficiency of communication about public services. In particular, we use a field experiment to assess the extent to which the introduction of this ICT platform improved monitoring by the district, effort by service providers, and inputs at service points in health, education and water in Arua District, Uganda. Despite relatively high levels of system uptake, enthusiasm of district officials, and anecdotal success stories, we find evidence of only marginal and uneven short-term improvements in health and water services, and no discernible long-term effects. Relatively few messages from citizens provided specific, actionable information about service provision within the purview and resource constraints of district officials, and users were often discouraged by officials’ responses. Our findings suggest that for crowd-sourced ICT programs to move from isolated success stories to long-term accountability enhancement, the quality and specific content of reports and responses provided by users and officials is centrally important….(More)”.

Patient Power: Crowdsourcing in Cancer


Bonnie J. Addario at the HuffPost: “…Understanding how to manage and manipulate vast sums of medical data to improve research and treatments has become a top priority in the cancer enterprise. Researchers at the University of North Carolina Chapel Hill are using IBM’s Watson and its artificial intelligence computing power to great effect. Dr. Norman Sharpless told Charlie Rose of CBS’ 60 Minutes that Watson is reading tens of millions of medical papers weekly (8,000 new cancer research papers are published every day) and regularly scanning the web for new clinical trials that most people, including researchers, are unaware of. The task is “essentially undoable,” he said, even for the best-informed experts.

UNC’s effort is truly wonderful, albeit a macro approach: less tailored, and accessible only to certain medical centers. My experience tells me what the real problem is: How does a patient newly diagnosed with lung cancer, fragile and scared, find the most relevant information without being overwhelmed and giving up? If the experts can’t easily find key data without Watson’s help, and Google’s first try turns up millions upon millions of semi-useful results, how do we build hope that there are good online answers for our patients?

We’ve thought about this a lot at the Addario Lung Cancer Foundation and figured out that the answer lies with the patients themselves. Why not crowdsource it with people who have lung cancer, their caregivers and family members?

So, we created the first-ever global Lung Cancer Patient Registry that simplifies the collection, management and distribution of critical health-related information – all in one place so that researchers and patients can easily access and find data specific to lung cancer patients.

This is a data-rich environment for those focusing solely on finding a cure for lung cancer. And it gives patients access to other patients to compare notes and generally feel safe sharing intimate details with their peers….(More)”

Dictionaries and crowdsourcing, wikis and user-generated content


Living Reference Work Entry by Michael Rundell: “It is tempting to dismiss crowdsourcing as a largely trivial recent development which has nothing useful to contribute to serious lexicography. This temptation should be resisted. When applied to dictionary-making, the broad term “crowdsourcing” in fact describes a range of distinct methods for creating or gathering linguistic data. A provisional typology is proposed, distinguishing three approaches which are often lumped under the heading “crowdsourcing.” These are: user-generated content (UGC), the wiki model, and what is referred to here as “crowdsourcing proper.” Each approach is explained, and examples are given of their applications in linguistic and lexicographic projects. The main argument of this chapter is that each of these methods – if properly understood and carefully managed – has significant potential for lexicography. The strengths and weaknesses of each model are identified, and suggestions are made for exploiting them in order to facilitate or enhance different operations within the process of developing descriptions of language. Crowdsourcing – in its various forms – should be seen as an opportunity rather than as a threat or diversion….(More)”.

Crowdsourcing website is helping volunteers save lives in hurricane-hit Houston


By Monday morning, Matthew Marchetti, a 27-year-old developer sitting in his leaky office, had slapped together an online mapping tool to track stranded residents. A day later, nearly 5,000 people had registered to be rescued, and 2,700 of them were safe.

If there’s a silver lining to Harvey, it’s the flood of civilian volunteers such as Marchetti who have joined the rescue effort. It became pretty clear shortly after the storm started pounding Houston that the city would need their help. The heavy rains quickly outstripped authorities’ ability to respond. People watched water levels rise around them while they waited on hold to get connected to a 911 dispatcher. Desperate local officials asked owners of high-water vehicles and boats to help collect their fellow citizens trapped on second stories and roofs.

In the past, disaster volunteers have relied on social media and Zello, an app that turns your phone into a walkie-talkie, to organize. … Harvey’s magnitude, both in terms of damage and the number of people anxious to pitch in, also overwhelmed those grassroots organizing methods, says Marchetti, who spent the first days after the storm hit monitoring Facebook and Zello to figure out what was needed where.

“The channels were just getting overloaded with people asking ‘Where do I go?’” he says. “We’ve tried to cut down on the level of noise.”

The idea behind his project, Houstonharveyrescue.com, is simple. The map lets people in need register their location. They are asked to include details—for example, if they’re sick or have small children—and their cell phone numbers.

The army of rescuers, who can also register on the site, can then easily spot the neediest cases. A team of 100 phone dispatchers follows up with those wanting to be rescued, and can send mass text messages with important information. An algorithm weeds out any repeats.
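The dedup step that “weeds out any repeats” can be sketched in a few lines. The field names, distance threshold, and matching rules below are assumptions for illustration; the site’s actual algorithm is not described in the article.

```python
import math

def _distance_m(a, b):
    """Approximate ground distance in metres between two (lat, lon) points."""
    lat1, lon1 = a
    lat2, lon2 = b
    dlat = (lat2 - lat1) * 111_000  # roughly 111 km per degree of latitude
    dlon = (lon2 - lon1) * 111_000 * math.cos(math.radians((lat1 + lat2) / 2))
    return math.hypot(dlat, dlon)

def dedupe_requests(requests, radius_m=100):
    """Keep the first report per phone number and per ~radius_m location cluster.

    Hypothetical rule: a later report is a repeat if it reuses a phone number
    we have already seen, or falls within radius_m of a kept report.
    """
    seen_phones = set()
    kept = []
    for req in requests:
        if req["phone"] in seen_phones:
            continue
        if any(_distance_m(req["location"], k["location"]) < radius_m
               for k in kept):
            continue
        seen_phones.add(req["phone"])
        kept.append(req)
    return kept

requests = [
    {"phone": "713-555-0101", "location": (29.7604, -95.3698), "note": "2 children"},
    {"phone": "713-555-0101", "location": (29.7604, -95.3698), "note": "resubmitted"},
    {"phone": "713-555-0199", "location": (29.7605, -95.3698), "note": "same house"},
    {"phone": "713-555-0150", "location": (29.8000, -95.4000), "note": "elderly, sick"},
]
print(len(dedupe_requests(requests)))  # prints 2: two unique rescue requests remain
```

Keeping the first report per cluster preserves the earliest timestamped call for help, while dispatchers can still read the dropped duplicates if a case needs more context.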

It might be one of the first open-sourced rescue missions in the US, and could be a valuable blueprint for future disaster volunteers. (For a similar civilian-led effort outside the US, look at Tijuana’s Strategic Committee for Humanitarian Aid, a Facebook group that sprouted last year when the Mexican border city was overwhelmed by a wave of Haitian immigrants.)…(More)”.

Crowdsourcing the Charlottesville Investigation


Internet sleuths got to work, and by Monday morning they were naming names and calling for arrests.

The name of the helmeted man went viral after New York Daily News columnist Shaun King posted a series of photos on Twitter and Facebook that more clearly showed his face and connected him to photos from a Facebook account. “Neck moles gave it away,” King wrote in his posts, which were shared more than 77,000 times. But the name of the red-bearded assailant was less clear: some on Twitter claimed it was a Texas man who goes by a Nordic alias online. Others were sure it was a Michigan man who, according to Facebook, attended high school with other white nationalist demonstrators depicted in photos from Charlottesville.

After being contacted for comment by The Marshall Project, the Michigan man removed his Facebook page from public view.

Such speculation, especially when it is not conclusive, has created new challenges for law enforcement. There is the obvious risk of false identification. In 2013, internet users wrongly identified university student Sunil Tripathi as a suspect in the Boston marathon bombing, prompting the internet forum Reddit to issue an apology for fostering “online witch hunts.” Already, an Arkansas professor was misidentified as a torch-bearing protester, though not a criminal suspect, at the Charlottesville rallies.

Beyond the cost to misidentified suspects, the crowdsourced identification of criminal suspects is both a benefit and burden to investigators.

“If someone says: ‘hey, I have a picture of someone assaulting another person, and committing a hate crime,’ that’s great,” said Sgt. Sean Whitcomb, the spokesman for the Seattle Police Department, which used social media to help identify the pilot of a drone that crashed into a 2015 Pride Parade. (The man was convicted in January.) “But saying, ‘I am pretty sure that this person is so and so’ – well, ‘pretty sure’ is not going to cut it.”

Still, credible information can help police establish probable cause, which means they can ask a judge to sign off on a search warrant, an arrest warrant, or both….(More)”.

Crowdsourcing citizen science: exploring the tensions between paid professionals and users


Jamie Woodcock et al in the Journal of Peer Production: “This paper explores the relationship between paid labour and unpaid users within the Zooniverse, a crowdsourced citizen science platform. The platform brings together a crowd of users to categorise data for use in scientific projects. It was initially established by a small group of academics for a single astronomy project, but has now grown into a multi-project platform that has engaged over 1.3 million users so far. The growth has introduced different dynamics to the platform as it has incorporated a greater number of scientists, developers, links with organisations, and funding arrangements—each bringing additional pressures and complications. The relationships between paid/professional and unpaid/citizen labour have become increasingly complicated with the rapid expansion of the Zooniverse. The paper draws on empirical data from an ongoing research project that has access to both users and paid professionals on the platform. There is the potential through growing peer-to-peer capacity that the boundaries between professional and citizen scientists can become significantly blurred. The findings of the paper, therefore, address important questions about the combinations of paid and unpaid labour, the involvement of a crowd in citizen science, and the contradictions this entails for an online platform. These are considered specifically from the viewpoint of the users and, therefore, form a new contribution to the theoretical understanding of crowdsourcing in practice….(More)”.