Paper by Liz Dowthwaite et al: “Increasing motivation to contribute to online citizen science projects can improve user experience and is critical in retaining and attracting users. Drawing on previous studies of motivation, this paper suggests self-determination theory as a framework for explaining the psychological constructs behind participation in Citizen Science. Through examining existing studies of motivation for 6 Zooniverse projects through this lens, the paper suggests how appealing to basic psychological needs could increase participation in online citizen science, considering current practices and directions for future developments and research…(More)”.
Citizen science and the United Nations Sustainable Development Goals
Steffen Fritz et al in Nature: “Traditional data sources are not sufficient for measuring the United Nations Sustainable Development Goals. New and non-traditional sources of data are required. Citizen science is an emerging example of a non-traditional data source that is already making a contribution. In this Perspective, we present a roadmap that outlines how citizen science can be integrated into the formal Sustainable Development Goals reporting mechanisms. Success will require leadership from the United Nations, innovation from National Statistical Offices and focus from the citizen-science community to identify the indicators for which citizen science can make a real contribution….(More)”.
To What Extent Does the EU General Data Protection Regulation (GDPR) Apply to Citizen Scientist-led Health Research with Mobile Devices?
Article by Edward Dove and Jiahong Chen: “In this article, we consider the possible application of the European General Data Protection Regulation (GDPR) to “citizen scientist”-led health research with mobile devices. We argue that the GDPR likely does cover this activity, depending on the specific context and the territorial scope. Remaining open questions that result from our analysis lead us to call for a lex specialis that would provide greater clarity and certainty regarding the processing of health data for research purposes, including by these non-traditional researchers…(More)”.
Massive Citizen Science Effort Seeks to Survey the Entire Great Barrier Reef
Jessica Wynne Lockhart at Smithsonian: “In August, marine biologists Johnny Gaskell and Peter Mumby and a team of researchers boarded a boat headed into unknown waters off the coast of Australia. For 14 long hours, they ploughed over 200 nautical miles, a Google Maps cache as their only guide. Just before dawn, they arrived at their destination of a previously uncharted blue hole—a cavernous opening descending through the seafloor.
After the rough night, Mumby was rewarded with something he hadn’t seen in his 30-year career. The reef surrounding the blue hole had nearly 100 percent healthy coral cover. Such a find is rare in the Great Barrier Reef, where coral bleaching events in 2016 and 2017 led to headlines proclaiming the reef “dead.”
“It made me think, ‘this is the story that people need to hear,’” Mumby says.
The expedition from Daydream Island off the coast of Queensland was a pilot program to test the methodology for the Great Reef Census, a citizen science project headed by Andy Ridley, founder of the annual conservation event Earth Hour. His latest organization, Citizens of the Great Barrier Reef, has set the ambitious goal of surveying the entire 1,400-mile-long reef system in 2020…(More)”.
GROW Citizens’ Observatory: Leveraging the power of citizens, open data and technology to generate engagement and action on soil policy and soil moisture monitoring
Paper by M. Woods et al: “Citizens’ Observatories (COs) seek to extend conventional citizen science activities to scale up the potential of citizen sensing for environmental monitoring and the creation of open datasets, knowledge and action around environmental issues, both local and global. The GROW CO has connected the planetary dimension of satellites with the hyperlocal context of farmers and their soil. GROW has faced three main interrelated challenges, one associated with each of the observatory’s three core audiences (citizens, scientists and policy makers): sustaining citizen engagement, assuring the quality of citizen-generated data, and moving from data to action in practice and policy. We discuss how each of these challenges was overcome and gave rise to the following related project outputs: 1) contributing to satellite validation and enhancing the collective intelligence of GEOSS; 2) dynamic maps and visualisations for growers, scientists and policy makers; 3) socio-technical innovations and data art…(More)”.
Aliens in Europe. An open approach to involve more people in invasive species detection
Paper by Sven Schade et al: “Amplified by the phenomenon of globalisation, such as increased human mobility and the worldwide shipping of goods, we observe an increasing spread of animals and plants outside their native habitats. A few of these ‘aliens’ have negative impacts on their environment, including threats to local biodiversity, agricultural productivity, and human health. Our work addresses these threats, particularly within the European Union (EU), where a related legal framework has been established. We follow an open and participatory approach that allows more people to share their experiences of invasive alien species (IAS) in their surroundings. Over the past three years, we developed a mobile phone application, together with the underlying data management and validation infrastructure, which allows smartphone users to report a selected list of IAS. We put quality assurance and data integration mechanisms into place that allow the uptake of information into existing official systems in order to make it accessible to the relevant policy-making at EU level.
This article summarises our scientific methodology and technical approach, explains our decisions, and provides an outlook on the future of IAS monitoring involving citizens and utilising the latest technological advancements. Last but not least, we emphasise software design for reuse, within the domain of IAS monitoring but also for supporting citizen science apps more generally. While much has already been achieved, many scientific, technical and organisational challenges still remain to be addressed before data can be seamlessly shared and integrated. Here, we particularly highlight issues that emerge in an international setting, which involves many different stakeholders…(More)”.
The Impact of Citizen Environmental Science in the United States
Paper by George Wyeth, Lee C. Paddock, Alison Parker, Robert L. Glicksman and Jecoliah Williams: “An increasingly sophisticated public, rapid changes in monitoring technology, the ability to process large volumes of data, and social media are increasing the capacity for members of the public and advocacy groups to gather, interpret, and exchange environmental data. This development has the potential to alter the government-centric approach to environmental governance; however, citizen science has had a mixed record in influencing government decisions and actions. This Article reviews the rapid changes that are going on in the field of citizen science and examines what makes citizen science initiatives impactful, as well as the barriers to greater impact. It reports on 10 case studies, and evaluates these to provide findings about the state of citizen science and recommendations on what might be done to increase its influence on environmental decision-making…(More)”.
Number of fact-checking outlets surges to 188 in more than 60 countries
Mark Stencel at Poynter: “The number of fact-checking outlets around the world has grown to 188 in more than 60 countries amid global concerns about the spread of misinformation, according to the latest tally by the Duke Reporters’ Lab.
Since the last annual fact-checking census in February 2018, we’ve added 39 more outlets that actively assess claims from politicians and social media, a 26% increase. The new total is also more than four times the 44 fact-checkers we counted when we launched our global database and map in 2014.
Globally, the largest growth came in Asia, which went from 22 to 35 outlets in the past year. Nine of the 27 fact-checking outlets that launched since the start of 2018 were in Asia, including six in India. Latin American fact-checking also saw a growth spurt in that same period, with two new outlets in Costa Rica, and others in Mexico, Panama and Venezuela.
The actual worldwide total is likely much higher than our current tally. That’s because more than a half-dozen of the fact-checkers we’ve added to the database since the start of 2018 began as election-related partnerships that involved the collaboration of multiple organizations. And some of those election partners are discussing ways to continue or reactivate that work — either together or on their own.
Over the past 12 months, five separate multimedia partnerships enlisted more than 60 different fact-checking organizations and other news companies to help debunk claims and verify information for voters in Mexico, Brazil, Sweden, Nigeria and the Philippines. And the Poynter Institute’s International Fact-Checking Network assembled a separate team of 19 media outlets from 13 countries to consolidate and share their reporting during the run-up to last month’s elections for the European Parliament. Our database includes each of these partnerships, along with several others — but not each of the individual partners. And because they were intentionally short-run projects, three of these big partnerships appear among the 74 inactive projects we also document in our database.
Politics isn’t the only driver for fact-checkers. Many outlets in our database are concentrating efforts on viral hoaxes and other forms of online misinformation — often in coordination with the big digital platforms on which that misinformation spreads.
We also continue to see new topic-specific fact-checkers such as Metafact in Australia and Health Feedback in France — both of which launched in 2018 to focus on claims about health and medicine for a worldwide audience…(More)”.
Citizen, Science, and Citizen Science
Introduction by Shun-Ling Chen and Fa-ti Fan to a special issue on citizen science: “The term citizen science has become very popular among scholars as well as the general public, and, given its growing presence in East Asia, it is perhaps not a moment too soon to have a special issue of EASTS on the topic. However, the quick expansion of citizen science, as a notion and a practice, has also spawned a mass of blurred meanings. The term is ill-defined and has been used in diverse ways. To avoid confusion, it is necessary to categorize the various and often ambiguous usages of the term and clarify their meanings.
As in any taxonomy, there are as many typologies as there are perspectives, parameters, and criteria adopted for classification. There have been helpful attempts at classifying different modes of citizen science (Cooper and Lewenstein 2016; Wiggins and Crowston 2012; Haklay 2012). However, they focused primarily on the different approaches or methods in citizen science. Ottinger’s two categories of citizen science—“scientific authority driven” and “social movement based”—foreground the criteria of action and justification, but they unnecessarily juxtapose science and society; in any case, they may be too general and leave out too much at the same time.1
In contrast, our classification will emphasize the different conceptions of citizen and citizenship in how we think about citizen science. We believe that this move can help us contextualize the ideas and practices of citizen science in the diverse socio-political conditions found in East Asia and beyond (Leach, Scoones, and Wynne 2005). To explain that point, we’ll begin with a few observations. First, the current discourse on citizen science tends to glide over such concepts as state, citizen, and the public and to assume that the reader will understand what they mean. This confidence originates in part from the fact that the default political framework of the discourse is usually Western (particularly Anglo-American). As a result, one often easily accepts a commonsense notion of participatory liberal democracy as the reference framework. However, one cannot assume that that is the de facto political framework for discussion of citizen science….(More)”.
Crowdsourcing Research Questions? Leveraging the Crowd’s Experiential Knowledge for Problem Finding
Paper by Tiare-Maria Brasseur, Susanne Beck, Henry Sauermann, Marion Poetz: “Recently, both researchers and policy makers have become increasingly interested in involving the general public (i.e., the crowd) in the discovery of new science-based knowledge. There has been a boom of citizen science/crowd science projects (e.g., Foldit or Galaxy Zoo) and global policy aspirations for greater public engagement in science (e.g., Horizon Europe). At the same time, however, there are also criticisms or doubts about this approach. Science is complex and laypeople often do not have the appropriate knowledge base for scientific judgments, so they rely on specialized experts (i.e., scientists) (Scharrer, Rupieper, Stadtler & Bromme, 2017). Given these two perspectives, there is as yet no consensus on what the crowd can do and what only researchers should do in scientific processes (Franzoni & Sauermann, 2014). Previous research demonstrates that crowds can be used efficiently and effectively in the late stages of the scientific research process (i.e., data collection and analysis). We are interested in finding out what crowds can actually contribute to research processes beyond data collection and analysis. Specifically, this paper aims to provide first empirical insights on how to leverage not only the sheer number of crowd contributors, but also their diversity in experience, for early phases of the research process (i.e., problem finding). In an online and a field experiment, we develop and test suitable mechanisms for facilitating the transfer of the crowd’s experience into scientific research questions. In doing so, we address the following two research questions: 1. What factors influence crowd contributors’ ability to generate research questions? 2. How do research questions generated by crowd members differ from research questions generated by scientists in terms of quality?
There are strong claims about the significant potential of people with experiential knowledge, i.e., sticky problem knowledge derived from one’s own practical experience and practices (Collins & Evans, 2002), to enhance the novelty and relevance of scientific research (e.g., Pols, 2014). Previous evidence that crowds with experiential knowledge (e.g., users in Poetz & Schreier, 2012) or ‘outsiders’/non-obvious individuals (Jeppesen & Lakhani, 2010) can outperform experts under certain conditions by having novel perspectives supports the assumption that the participation of non-scientists (i.e., crowd members) in scientific problem-finding might complement scientists’ lack of experiential knowledge. Furthermore, by bringing in exactly these new perspectives, they might help overcome problems of fixation/inflexibility in cognitive-search processes among scientists (Acar & van den Ende, 2016). Thus, crowd members with (higher levels of) experiential knowledge are expected to be superior to scientists in identifying very novel and out-of-the-box research problems with high practical relevance. However, there are clear reasons to be skeptical: despite their advantage of possessing important experiential knowledge, the crowd lacks the scientific knowledge we assume to be required to formulate meaningful research questions. To study exactly how the transfer of crowd members’ experiential knowledge into science can be facilitated, we conducted two experimental studies in the context of traumatology (i.e., research on accidental injuries). First, we conducted a large-scale online experiment (N=704) in collaboration with an international crowdsourcing platform to test the effect of two facilitating treatments on crowd members’ ability to formulate real research questions (study 1). We used a 2 (structuring knowledge/no structuring knowledge) x 2 (science knowledge/no science knowledge) between-subject experimental design.
Second, we tested the same treatments in the field (study 2), i.e., in a crowdsourcing project in collaboration with the LBG Open Innovation in Science Center. We invited patients, caretakers and medical professionals (e.g., surgeons, physical therapists or nurses) concerned with accidental injuries to submit research questions using a customized online platform (https://tell-us.online/) to investigate the causal relationship between our treatments and different types and levels of experiential knowledge (N=118). An international jury of experts (i.e., journal editors in the field of traumatology) then assesses the quality of submitted questions (from the online and field experiments) along several quality dimensions (i.e., clarity, novelty, scientific impact, practical impact, feasibility) in an online evaluation process. To assess the net effect of our treatments, we further include in the expert evaluation a random sample of research questions obtained from early-stage research papers (i.e., conference papers), blind to the source, and compare them with the baseline groups of our experiments. We are currently finalizing the data collection…(More)”.