What Is Citizen Science? – A Scientometric Meta-Analysis


Christopher Kullenberg and Dick Kasperowski at PLOS One: “The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach to science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy, and as a way of promoting political decision processes involving the environment and health.

Objective

In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time, identify which strands of research have adopted CS, and assess the scientific output of CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms.

Results

Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology for collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the number of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data….(More)”

Crowdsourcing Diagnosis for Patients With Undiagnosed Illnesses: An Evaluation of CrowdMed


Paper by Ashley N.D. Meyer et al in the Journal of Medical Internet Research: “Background: Despite visits to multiple physicians, many patients remain undiagnosed. A new online program, CrowdMed, aims to leverage the “wisdom of the crowd” by giving patients an opportunity to submit their cases and interact with case solvers to obtain diagnostic possibilities.

Objective: To describe CrowdMed and provide an independent assessment of its impact.

Methods: Patients submit their cases online to CrowdMed and case solvers sign up to help diagnose patients. Case solvers attempt to solve patients’ diagnostic dilemmas and often have an interactive online discussion with patients, including an exchange of additional diagnostic details. At the end, patients receive detailed reports containing diagnostic suggestions to discuss with their physicians and fill out surveys about their outcomes. We independently analyzed data collected from cases between May 2013 and April 2015 to determine patient and case solver characteristics and case outcomes.

Results: During the study period, 397 cases were completed. These patients had previously visited a median of 5 physicians, incurred a median of US $10,000 in medical expenses, spent a median of 50 hours researching their illnesses online, and had symptoms for a median of 2.6 years. During this period, 357 active case solvers participated, of whom 37.9% (132/348) were male and 58.3% (208/357) worked or studied in the medical industry. About half (50.9%, 202/397) of patients were likely to recommend CrowdMed to a friend, 59.6% (233/391) reported that the process gave insights that led them closer to the correct diagnoses, 57% (52/92) reported estimated decreases in medical expenses, and 38% (29/77) reported estimated improvement in school or work productivity.

Conclusions: Some patients with undiagnosed illnesses reported receiving helpful guidance from crowdsourcing their diagnoses during their difficult diagnostic journeys. However, further development and use of crowdsourcing methods to facilitate diagnosis requires long-term evaluation as well as validation to account for patients’ ultimate correct diagnoses….(More)”

Algorithmic Life: Calculative Devices in the Age of Big Data


Book edited by Louise Amoore and Volha Piotukh: “This book critically explores forms and techniques of calculation that emerge with digital computation, and their implications. The contributors demonstrate that digital calculative devices matter beyond their specific functions as they progressively shape, transform and govern all areas of our life. In particular, the book addresses such questions as:

  • How does the drive to make sense of, and productively use, large amounts of diverse data, inform the development of new calculative devices, logics and techniques?
  • How do these devices, logics and techniques affect our capacity to decide and to act?
  • How do mundane elements of our physical and virtual existence become data to be analysed and rearranged in complex ensembles of people and things?
  • In what ways are conventional notions of public and private, individual and population, certainty and probability, rule and exception transformed and what are the consequences?
  • How does the search for ‘hidden’ connections and patterns change our understanding of social relations and associative life?
  • Do contemporary modes of calculation produce new thresholds of calculability and computability, allowing for the improbable or the merely possible to be embraced and acted upon?
  • As contemporary approaches to governing uncertain futures seek to anticipate future events, how are calculation and decision engaged anew?

Drawing together different strands of cutting-edge research that is both theoretically sophisticated and empirically rich, this book makes an important contribution to several areas of scholarship, including the emerging social science field of software studies, and will be a vital resource for students and scholars alike….(More)”

Digital Dividends – World Development Report 2016


World Bank Report: “Digital technologies have spread rapidly in much of the world. Digital dividends—the broader development benefits from using these technologies—have lagged behind. In many instances digital technologies have boosted growth, expanded opportunities, and improved service delivery. Yet their aggregate impact has fallen short and is unevenly distributed. For digital technologies to benefit everyone everywhere requires closing the remaining digital divide, especially in internet access. But greater digital adoption will not be enough. To get the most out of the digital revolution, countries also need to work on the “analog complements”—by strengthening regulations that ensure competition among businesses, by adapting workers’ skills to the demands of the new economy, and by ensuring that institutions are accountable…..

Engendering control: The gap between institutions and technology

The internet was expected to usher in a new era of accountability and political empowerment, with citizens participating in policy making and forming self-organized virtual communities to hold government to account. These hopes have been largely unmet. While the internet has made many government functions more efficient and convenient, it has generally had limited impact on the most protracted problems—how to improve service provider accountability (principal-agent problems) and how to broaden public involvement and give greater voice to the poor and disadvantaged (collective action problems).

Whether citizens can successfully use the internet to raise the accountability of service providers depends on the context. Most important is the strength of existing accountability relationships between policy makers and providers, as discussed in the 2004 World Development Report, Making Services Work for Poor People. An examination of seventeen digital engagement initiatives for this Report finds that of nine cases in which citizen engagement involved a partnership between civil society organizations (CSOs) and government, three were successful (table O.2). Of eight cases that did not involve a partnership, most failed. This suggests that, although collaboration with government is not a sufficient condition for success, it may well be a necessary one.

Another ingredient for success is effective offline mobilization, particularly because citizen uptake of the digital channels was low in most of the cases. For example, Maji Matone, which facilitates SMS-based feedback about rural water supply problems in Tanzania, received only 53 SMS messages during its first six months of operation, far fewer than the initial target of 3,000, and was then abandoned. Political participation and engagement of the poor have remained rare, while in many countries the internet has disproportionately benefited political elites and increased governments’ capacity to influence social and political discourse. Digital technologies have sometimes increased voting overall, but this has not necessarily resulted in more informed or more representative voting. In the Brazilian state of Rio Grande do Sul, online voting increased voter turnout by 8 percentage points, but online voters were disproportionately wealthier and more educated (figure O.19). Even in developed countries, engaging citizens continues to be a challenge. Only a small, unrepresentative subset of the population participates, and it is often difficult to sustain citizen engagement.

There is no agreement among social scientists on whether the internet disproportionately empowers citizens or political elites, whether it increases polarization, or whether it deepens or weakens social capital, in some cases even facilitating organized violence. The use of technology in governments tends to be successful when it addresses fairly straightforward information and monitoring problems. For more demanding challenges, such as better management of providers or giving citizens greater voice, technology helps only when governments are already responsive. The internet will thus often reinforce rather than replace existing accountability relationships between governments and citizens, including giving governments more capacity for surveillance and control (box O.6). Closing the gap between changing technology and unchanging institutions will require initiatives that strengthen the transparency and accountability of governments….(More)”

Can crowdsourcing decipher the roots of armed conflict?


Stephanie Kanowitz at GCN: “Researchers at Pennsylvania State University and the University of Texas at Dallas are proving that there’s accuracy, not just safety, in numbers. The Correlates of War project, a long-standing effort that studies the history of warfare, is now experimenting with crowdsourcing as a way to more quickly and inexpensively create a global conflict database that could help explain when and why countries go to war.

The goal is to facilitate the collection, dissemination and use of reliable data in international relations, but a byproduct has emerged: the development of technology that uses machine learning and natural language processing to efficiently, cost-effectively and accurately create databases from news articles that detail militarized interstate disputes.

The project is in its fifth iteration, having released the fourth set of Militarized Interstate Dispute (MID) data in 2014. To create those earlier versions, researchers paid subject-matter experts such as political scientists to read and hand-code newswire articles about disputes, identifying features of possible militarized incidents. Now, however, they’re soliciting help from anyone and everyone — and finding the results are much the same as what the experts produced, except the results come in faster and with significantly less expense.

As news articles come across the wire, the researchers pull them and formulate questions about them that help evaluate the military events. Next, the articles and questions are loaded onto Amazon Mechanical Turk, a marketplace for crowdsourcing. The project assigns articles to readers, who typically spend about 10 minutes reading an article and responding to the questions. The readers submit the answers to the project researchers, who review them. The project assigns the same article to multiple workers and uses computer algorithms to combine the data into one annotation.

A systematic comparison of the crowdsourced responses with those of trained subject-matter experts showed that the crowdsourced work was accurate for 68 percent of the news reports coded. More important, the aggregation of answers for each article showed that common answers from multiple readers strongly correlated with correct coding. This allowed researchers to easily flag the articles that required deeper expert involvement and process the majority of the news items in near-real time and at limited cost….(more)”
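As a rough illustration of the kind of combination step described above (not the project's actual algorithm, which the article does not detail), a majority-vote aggregator can merge several readers' answers into one annotation and flag low-agreement articles for expert review:

```python
from collections import Counter

def aggregate(answers, flag_threshold=0.6):
    """Combine several crowd workers' answers to one coding question into a
    single annotation, and flag the article for expert review when the level
    of agreement falls below flag_threshold.

    Returns (majority_answer, agreement, needs_expert).
    """
    counts = Counter(answers)
    majority_answer, votes = counts.most_common(1)[0]
    agreement = votes / len(answers)
    return majority_answer, agreement, agreement < flag_threshold

# Five readers coded the same news article; four agree, so the common
# answer is accepted with high agreement.
print(aggregate(["militarized", "militarized", "militarized",
                 "not militarized", "militarized"]))
# → ('militarized', 0.8, False)

# A split vote yields low agreement, so the article would be routed to a
# trained subject-matter expert instead of being coded automatically.
print(aggregate(["A", "A", "B", "B", "C"]))
# → ('A', 0.4, True)
```

This matches the finding that common answers from multiple readers correlate with correct coding, while disagreement identifies the minority of articles needing deeper expert involvement.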

Humanity 360: World Humanitarian Data and Trends 2015


OCHA: “World Humanitarian Data and Trends highlights major trends, challenges and opportunities in the nature of humanitarian crises, showing how the humanitarian landscape is evolving in a rapidly changing world.


LEAVING NO ONE BEHIND: HUMANITARIAN EFFECTIVENESS IN THE AGE OF THE SUSTAINABLE DEVELOPMENT GOALS

Exploring what humanitarian effectiveness means in today’s world ‐ better meeting the needs of people in crisis, better moving people out of crisis.


TOOLS FOR DATA COORDINATION AND COLLECTION

 

How Much Development Data Is Enough?


Keith D. Shepherd at Project Syndicate: “Rapid advances in technology have dramatically lowered the cost of gathering data. Sensors in space, the sky, the lab, and the field, along with newfound opportunities for crowdsourcing and widespread adoption of the Internet and mobile telephones, are making large amounts of information available to those for whom it was previously out of reach. A small-scale farmer in rural Africa, for example, can now access weather forecasts and market prices at the tap of a screen.

This data revolution offers enormous potential for improving decision-making at every level – from the local farmer to world-spanning development organizations. But gathering data is not enough. The information must also be managed and evaluated – and doing this properly can be far more complicated and expensive than the effort to collect it. If the decisions to be improved are not first properly identified and analyzed, there is a high risk that much of the collection effort could be wasted or misdirected.

This conclusion is itself based on empirical analysis. The evidence is weak, for example, that monitoring initiatives in agriculture or environmental management have had a positive impact. Quantitative analysis of decisions across many domains, including environmental policy, business investments, and cyber security, has shown that people tend to overestimate the amount of data needed to make a good decision or misunderstand what type of data are needed.

Furthermore, grave errors can occur when large data sets are mined using machine algorithms without first having properly examined the decision that needs to be made. There are many examples of cases in which data mining has led to the wrong conclusion – including in medical diagnoses or legal cases – because experts in the field were not consulted and critical information was left out of the analysis.

Decision science, which combines understanding of behavior with universal principles of coherent decision-making, limits these risks by pairing empirical data with expert knowledge. If the data revolution is to be harnessed in the service of sustainable development, the best practices of this field must be incorporated into the effort.

The first step is to identify and frame frequently recurring decisions. In the field of development, these include large-scale decisions such as spending priorities – and thus budget allocations – by governments and international organizations. But it also includes choices made on a much smaller scale: farmers pondering which crops to plant, how much fertilizer to apply, and when and where to sell their produce.

The second step is to build a quantitative model of the uncertainties in such decisions, including the various triggers, consequences, controls, and mitigants, as well as the different costs, benefits, and risks involved. Incorporating – rather than ignoring – difficult-to-measure, highly uncertain factors leads to the best decisions…..

The third step is to compute the value of obtaining additional information – something that is possible only if the uncertainties in all of the variables have been quantified. The value of information is the amount a rational decision-maker would be willing to pay for it. So we need to know where additional data will have value for improving a decision and how much we should spend to get it. In some cases, no further information may be needed to make a sound decision; in others, acquiring further data could be worth millions of dollars….(More)”
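The three steps above can be sketched with a toy Monte Carlo calculation of the expected value of perfect information (EVPI), a standard decision-science quantity. The scenario and all numbers below are hypothetical, chosen only to illustrate the idea of pricing additional data before collecting it:

```python
import random

random.seed(42)

# Step 1, framed decision (hypothetical): a farmer chooses between two crops
# whose profit depends on uncertain seasonal rainfall.
def profit(crop, rainfall_mm):
    if crop == "maize":
        return 120 if rainfall_mm > 500 else 40  # pays off only in wet years
    return 80  # "sorghum": drought-tolerant, but profit is capped lower

# Step 2: quantify the uncertainty with a rough prior over rainfall.
samples = [random.gauss(520, 80) for _ in range(100_000)]

# Best expected profit when the crop must be chosen before rainfall is known.
ev_without_info = max(
    sum(profit(crop, r) for r in samples) / len(samples)
    for crop in ("maize", "sorghum")
)

# With perfect information, the best crop can be picked for each outcome.
ev_with_info = sum(
    max(profit(crop, r) for crop in ("maize", "sorghum")) for r in samples
) / len(samples)

# Step 3: EVPI is the most a rational decision-maker should pay for better
# rainfall data; spending more than this on data collection is wasted.
evpi = ev_with_info - ev_without_info
print(round(evpi, 1))
```

When the EVPI comes out near zero, no further information is needed to make a sound decision; when it is large, acquiring further data is worth paying for, which is exactly the screening the article recommends before launching a collection effort.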

Initial Conditions Matter: Social Capital and Participatory Development


Paper by Lisa A. Cameron et al: “Billions of dollars have been spent on participatory development programs in the developing world. These programs give community members an active decision-making role. Given the emphasis on community involvement, one might expect that the effectiveness of this approach would depend on communities’ pre-existing social capital stocks. Using data from a large randomised field experiment of Community-Led Total Sanitation in Indonesia, we find that villages with high initial social capital built toilets and reduced open defecation, resulting in substantial health benefits. In villages with low initial stocks of social capital, the approach was counterproductive – fewer toilets were built than in control communities and social capital suffered….(More)”

Privacy, security and data protection in smart cities: a critical EU law perspective


CREATe Working Paper by Lilian Edwards: “Smart cities” are a buzzword of the moment. Although legal interest is growing, most academic responses, at least in the EU, are still from the technological, urban studies, environmental and sociological sectors rather than the legal sector, and have primarily laid emphasis on the social, urban, policing and environmental benefits of smart cities, rather than their challenges, in an often rather uncritical fashion. However, a growing backlash from the privacy and surveillance sectors warns of the potential threat to personal privacy posed by smart cities. A key issue is the lack of opportunity in an ambient or smart city environment for the giving of meaningful consent to processing of personal data; other crucial issues include the degree to which smart cities collect private data from inevitable public interactions, the “privatisation” of ownership of both infrastructure and data, the repurposing of “big data” drawn from the IoT in smart cities and the storage of that data in the Cloud.

This paper, drawing on author engagement with smart city development in Glasgow as well as the results of an international conference in the area curated by the author, argues that smart cities combine the three greatest current threats to personal privacy, with which regulation has so far failed to deal effectively: the Internet of Things (IoT) or “ubiquitous computing”; “Big Data”; and the Cloud. While these three phenomena have been examined extensively in much privacy literature (particularly the last two), both in the US and EU, the combination is under-explored. Furthermore, US legal literature and solutions (if any) are not simply transferable to the EU because of the US’s lack of an omnibus data protection (DP) law. I will discuss how and if EU DP law controls possible threats to personal privacy from smart cities and suggest further research on two possible solutions: one, a mandatory holistic privacy impact assessment (PIA) exercise for smart cities; two, code solutions for flagging the need for, and consequences of, giving consent to collection of data in ambient environments….(More)

Social Media for Government Services


Book edited by Surya Nepal, Cécile Paris and Dimitrios Georgakopoulos: “This book highlights state-of-the-art research, development and implementation efforts concerning social media in government services, bringing together researchers and practitioners in a number of case studies. It elucidates a number of significant challenges associated with social media specific to government services, such as: benefits and methods of assessment; usability and suitability of tools, technologies and platforms; governance policies and frameworks; opportunities for new services; integrating social media with organisational business processes; and specific case studies. The book also highlights the range of uses and applications of social media in the government domain, at both local and federal levels. As such, it offers a valuable resource for a broad readership including academic researchers, practitioners in the IT industry, developers, and government policy- and decision-makers….(More)