Paper by T. A. Nelson, F. Goodchild and D. J. Wright: “Science has traditionally been driven by curiosity and followed one goal: the pursuit of truth and the advancement of knowledge. Recently, ethics, empathy, and equity, which we term “the 3Es,” are emerging as new drivers of research and disrupting established practices. Drawing on our own field of GIScience (geographic information science), our goal is to use the geographic approach to accelerate the response to the 3Es by identifying priority issues and research needs that, if addressed, will advance ethical, empathic, and equitable GIScience. We also aim to stimulate similar responses in other disciplines. Organized around the 3Es, we discuss ethical issues arising from locational privacy and cartographic integrity, how our ability to build knowledge that will lead to empathy can be curbed by data that lack representativeness and by inadvertent inferential error, and how GIScientists can lead toward equity by supporting social justice efforts and democratizing access to spatial science and its tools. We conclude with a call to action and invite all scientists to join in a fundamentally different science that responds to the 3Es and mobilizes for change by engaging in humility, broadening measures of excellence and success, diversifying our networks, and creating pathways to inclusive education. Science united around the 3Es is the right response to this unique moment where society and the planet are facing a vast array of challenges that require knowledge, truth, and action…(More)”
Opening Up to Open Science
Essay by Chelle Gentemann, Christopher Erdmann and Caitlin Kroeger: “The modern Hippocratic Oath outlines ethical standards that physicians worldwide swear to uphold. “I will respect the hard-won scientific gains of those physicians in whose steps I walk,” one of its tenets reads, “and gladly share such knowledge as is mine with those who are to follow.”
But what form, exactly, should knowledge-sharing take? In the practice of modern science, knowledge in most scientific disciplines is generally shared through peer-reviewed publications at the end of a project. Although publication is both expected and incentivized—it plays a key role in career advancement, for example—many scientists do not take the extra step of sharing data, detailed methods, or code, making it more difficult for others to replicate, verify, and build on their results. Even beyond that, professional science today is full of personal and institutional incentives to hold information closely to retain a competitive advantage.
This way of sharing science has some benefits: peer review, for example, helps to ensure (even if it never guarantees) scientific integrity and prevent inadvertent misuse of data or code. But the status quo also comes with clear costs: it creates barriers (in the form of publication paywalls), slows the pace of innovation, and limits the impact of research. Fast science is increasingly necessary, and with good reason. Technology has not only improved the speed at which science is carried out, but many of the problems scientists study, from climate change to COVID-19, demand urgency. Whether modeling the behavior of wildfires or developing a vaccine, the need for scientists to work together and share knowledge has never been greater. In this environment, the rapid dissemination of knowledge is critical; closed, siloed knowledge slows progress to a degree society cannot afford. Imagine the consequences today if, as in the 2003 SARS disease outbreak, the task of sequencing genomes still took months and tools for labs to share the results openly online didn’t exist. Today’s challenges require scientists to adapt and better recognize, facilitate, and reward collaboration.
Open science is a path toward a collaborative culture that, enabled by a range of technologies, empowers the open sharing of data, information, and knowledge within the scientific community and the wider public to accelerate scientific research and understanding. Yet despite its benefits, open science has not been widely embraced…(More)”
Citizen science and environmental justice: exploring contradictory outcomes through a case study of air quality monitoring in Dublin
Paper by Fiadh Tubridy et al.: “Citizen science is advocated as a response to a broad range of contemporary societal and ecological challenges. However, there are widely varying models of citizen science which may either challenge or reinforce existing knowledge paradigms and associated power dynamics. This paper explores different approaches to citizen science in the context of air quality monitoring in terms of their implications for environmental justice. This is achieved through a case study of air quality management in Dublin which focuses on the role of citizen science in this context. The evidence shows that the dominant interpretation of citizen science in Dublin is that it provides a means to promote awareness and behaviour change rather than to generate knowledge and inform new regulations or policies. This is linked to an overall context of technocratic governance and the exclusion of non-experts from decision-making. It is further closely linked to neoliberal governance imperatives to individualise responsibility and promote market-based solutions to environmental challenges. Last, the evidence highlights that this model of citizen science risks compounding inequalities by transferring responsibility and blame for air pollution to those who have limited resources to address it. Overall, the paper highlights the need for critical analysis of the implications of citizen science in different instances and for alternative models of citizen science whereby communities would contribute to setting objectives and determining how their data is used…(More)”.
Time to recognize authorship of open data
Nature Editorial: “At times, it seems there’s an unstoppable momentum towards the principle that data sets should be made widely available for research purposes (also called open data). Research funders all over the world are endorsing the open data-management standards known as the FAIR principles (which ensure data are findable, accessible, interoperable and reusable). Journals are increasingly asking authors to make the underlying data behind papers accessible to their peers. Data sets are accompanied by a digital object identifier (DOI) so they can be easily found. And this citability helps researchers to get credit for the data they generate.
But reality sometimes tells a different story. The world’s systems for evaluating science do not (yet) value openly shared data in the same way that they value outputs such as journal articles or books. Funders and research leaders who design these systems accept that there are many kinds of scientific output, but many reject the idea that there is a hierarchy among them.
In practice, those in powerful positions in science tend not to regard open data sets in the same way as publications when it comes to making hiring and promotion decisions or awarding memberships to important committees, or in national evaluation systems. The open-data revolution will stall unless this changes….
Universities, research groups, funding agencies and publishers should, together, start to consider how they could better recognize open data in their evaluation systems. They need to ask: how can those who have gone the extra mile on open data be credited appropriately?
There will always be instances in which researchers cannot be given access to human data. Data from infants, for example, are highly sensitive and need to pass stringent privacy and other tests. Moreover, making data sets accessible takes time and funding that researchers don’t always have. And researchers in low- and middle-income countries have concerns that their data could be used by researchers or businesses in high-income countries in ways that they have not consented to.
But crediting all those who contribute their knowledge to a research output is a cornerstone of science. The prevailing convention — whereby those who make their data open for researchers to use make do with acknowledgement and a citation — needs a rethink. As long as authorship on a paper is significantly more valued than data generation, this will disincentivize making data sets open. The sooner we change this, the better….(More)”.
Measuring costs and benefits of citizen science
Article by Kathy Tzilivakis: “It’s never been easy to accurately measure the impact of any scientific research, but it’s even harder for citizen science projects, which don’t follow traditional methods. Public involvement places citizen science in a new era of data collection, one that requires a new measurement plan.
As you read this, thousands of ordinary people across Europe are busy tagging, categorizing and counting in the name of science. They may be reporting crop yields, analyzing plastic waste found in nature or monitoring the populations of wildlife. This relatively new method of public participation in scientific enquiry is experiencing a considerable upswing in both quality and scale of projects.
Of course, people have been sharing observations about the natural world for millennia—way before the term “citizen science” appeared on the cover of sociologist Alan Irwin’s 1995 book “Citizen Science: A Study of People, Expertise, and Sustainable Development.”
Today, citizen science is on the rise with bigger projects that are more ambitious and better networked than ever before. And while collecting seawater samples and photographing wild birds are two well-known examples of citizen science, this is just the tip of the iceberg.
Citizen science is evolving thanks to new data collection techniques enabled by the internet, smartphones and social media. Increased connectivity is encouraging a wide range of observations that can be easily recorded and shared. The reams of crowd-sourced data from members of the public are a boon for researchers working on large-scale and geographically diverse projects. Often it would be too difficult and expensive to obtain this data otherwise.
Both sides win: scientists get help collecting much better data, and an enthusiastic public gets to engage with the fascinating world of science.
But success has been difficult to define, let alone to translate into indicators for assessment. Until now.
A group of EU researchers has taken on the challenge of building the first integrated and interactive platform to measure costs and benefits of citizen science….
“The platform will be very complex but able to capture the characteristics and the results of projects, and measure their impact on several domains like society, economy, environment, science and technology and governance,” said Dr. Luigi Ceccaroni, who is coordinating the Measuring Impact of Citizen Science (MICS) project behind the platform. Currently at the testing stage, the platform is slated to go live before the end of this year….(More)”
Should we get rid of the scientific paper?
Article by Stuart Ritchie: “But although the internet has transformed the way we read it, the overall system for how we publish science remains largely unchanged. We still have scientific papers; we still send them off to peer reviewers; we still have editors who give the ultimate thumbs up or down as to whether a paper is published in their journal.
This system comes with big problems. Chief among them is the issue of publication bias: reviewers and editors are more likely to give a scientific paper a good write-up and publish it in their journal if it reports positive or exciting results. So scientists go to great lengths to hype up their studies, lean on their analyses so they produce “better” results, and sometimes even commit fraud in order to impress those all-important gatekeepers. This drastically distorts our view of what really went on.
There are some possible fixes that change the way journals work. Maybe the decision to publish could be made based only on the methodology of a study, rather than on its results (this is already happening to a modest extent in a few journals). Maybe scientists could just publish all their research by default, and journals would curate, rather than decide, which results get out into the world. But maybe we could go a step further, and get rid of scientific papers altogether.
Scientists are obsessed with papers – specifically, with having more papers published under their name, extending the crucial “publications” section of their CV. So it might sound outrageous to suggest we could do without them. But that obsession is the problem. Paradoxically, the sacred status of a published, peer-reviewed paper makes it harder to get the contents of those papers right.
Consider the messy reality of scientific research. Studies almost always throw up weird, unexpected numbers that complicate any simple interpretation. But a traditional paper – word count and all – pretty well forces you to dumb things down. If what you’re working towards is a big, milestone goal of a published paper, the temptation is ever-present to file away a few of the jagged edges of your results, to help “tell a better story”. Many scientists admit, in surveys, to doing just that – making their results into unambiguous, attractive-looking papers, but distorting the science along the way.
And consider corrections. We know that scientific papers regularly contain errors. One algorithm that ran through thousands of psychology papers found that, at worst, more than 50% had one specific statistical error, and more than 15% had an error serious enough to overturn the results. With papers, correcting this kind of mistake is a slog: you have to write in to the journal, get the attention of the busy editor, and get them to issue a new, short paper that formally details the correction. Many scientists who request corrections find themselves stonewalled or otherwise ignored by journals. Imagine the number of errors that litter the scientific literature that haven’t been corrected because to do so is just too much hassle.
Finally, consider data. Back in the day, sharing the raw data that formed the basis of a paper with that paper’s readers was more or less impossible. Now it can be done in a few clicks, by uploading the data to an open repository. And yet, we act as if we live in the world of yesteryear: papers still hardly ever have the data attached, preventing reviewers and readers from seeing the full picture.
The solution to all these problems is the same as the answer to “How do I organise my journals if I don’t use cornflakes boxes?” Use the internet. We can change papers into mini-websites (sometimes called “notebooks”) that openly report the results of a given study. Not only does this give everyone a view of the full process from data to analysis to write-up – the dataset would be appended to the website along with all the statistical code used to analyse it, and anyone could reproduce the full analysis and check they get the same numbers – but any corrections could be made swiftly and efficiently, with the date and time of all updates publicly logged…(More)”.
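To make the “notebook” idea above concrete, here is a minimal sketch (an editorial illustration, not an example from the article): the write-up, the data, and the statistical code travel together, so any reader can re-run the analysis and confirm they get the same numbers. The simulated dataset, fixed random seed, and two-group t-test are assumptions chosen purely for illustration.

```python
# A minimal sketch of a "notebook"-style analysis: data, code, and reported
# numbers live together and can be re-run by anyone. The dataset is simulated
# with a fixed seed purely for illustration; in practice the raw data file
# would be bundled alongside this script.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)   # fixed seed -> identical numbers on every run
control = rng.normal(loc=50.0, scale=10.0, size=40)
treated = rng.normal(loc=55.0, scale=10.0, size=40)

# The analysis reported in the write-up, reproducible by any reader.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean difference: {treated.mean() - control.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A consistency check in the spirit of the automated error-detection mentioned
# above: recompute the p-value from the reported t statistic and degrees of
# freedom, and compare it with the reported p-value.
df = len(treated) + len(control) - 2
p_recomputed = 2 * stats.t.sf(abs(t_stat), df)
print(f"p recomputed from t and df: {p_recomputed:.4f}  (should match the reported p)")
```

Because every number can be regenerated from the bundled data and code, a correction becomes a matter of editing the script and re-running it, with the change publicly logged, rather than petitioning a journal editor.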
Decoding human behavior with big data? Critical, constructive input from the decision sciences
Paper by Konstantinos V. Katsikopoulos and Marc C. Canellas: “Big data analytics employs algorithms to uncover people’s preferences and values, and support their decision making. A central assumption of big data analytics is that it can explain and predict human behavior. We investigate this assumption, aiming to enhance the knowledge basis for developing algorithmic standards in big data analytics. First, we argue that big data analytics is by design atheoretical and does not provide process-based explanations of human behavior; thus, it is unfit to support deliberation that is transparent and explainable. Second, we review evidence from interdisciplinary decision science, showing that the accuracy of complex algorithms used in big data analytics for predicting human behavior is not consistently higher than that of simple rules of thumb. Rather, it is lower in situations such as predicting election outcomes, criminal profiling, and granting bail. Big data algorithms can be considered as candidate models for explaining, predicting, and supporting human decision making when they match, in transparency and accuracy, simple, process-based, domain-grounded theories of human behavior. Big data analytics can be inspired by behavioral and cognitive theory….(More)”.
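To make the abstract’s contrast concrete, the sketch below (an editorial illustration, not material from the paper) sets a unit-weight “tallying” rule of thumb against a fitted logistic regression on synthetic data; the cues, weights, threshold, and sample size are invented for illustration, and which approach comes out ahead depends entirely on the domain and the data.

```python
# Illustrative comparison of a simple rule of thumb with a fitted model on
# synthetic data. All numbers here (cue weights, sample size, threshold) are
# assumptions made for the sketch, not results from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, k = 500, 3                                # observations and binary cues
X = rng.integers(0, 2, size=(n, k))          # cue values: 0 = absent, 1 = present
logits = X @ np.array([1.0, 0.8, 0.6]) - 1.2
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# "Complex" model: logistic regression with weights estimated from the data.
clf = LogisticRegression().fit(X_tr, y_tr)
acc_lr = clf.score(X_te, y_te)

# Rule of thumb: tallying -- ignore the weights, count how many cues are
# present, and predict the outcome when a majority of cues point that way.
pred_tally = (X_te.sum(axis=1) >= 2).astype(int)
acc_tally = (pred_tally == y_te).mean()

print(f"logistic regression accuracy: {acc_lr:.2f}")
print(f"tallying heuristic accuracy:  {acc_tally:.2f}")
```

The tallying rule is transparent by construction: its prediction can be explained in one sentence, which is the kind of process-based account the authors argue complex big-data pipelines rarely provide.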
Opening up Science—to Skeptics
Essay by Rohan R. Arcot and Hunter Gehlbach: “Recently, the soaring trajectory of science skepticism seems to be rivaled only by global temperatures. Empirically established facts—around vaccines, elections, climate science, and the like—face potent headwinds. Despite the scientific consensus on these issues, much of the public remains unconvinced. In turn, science skepticism threatens our health, the health of our democracy, and the health of our planet.
The research community is no stranger to skepticism. Its own members have been questioning the integrity of many scientific findings with particular intensity of late. In response, we have seen a swell of open science norms and practices, which provide greater transparency about key procedural details of the research process, mitigating many research skeptics’ misgivings. These open practices greatly facilitate how science is communicated—but only between scientists.
Given the present historical moment’s critical need for science, we wondered: What if scientists allowed skeptics in the general public to look under the hood at how their studies were conducted? Could opening up the basic ideas of open science beyond scholars help combat the epidemic of science skepticism?
Intrigued by this possibility, we sought a qualified skeptic and returned to Rohan’s father. If we could chaperone someone through a scientific journey—a person who could vicariously experience the key steps along the way—could our openness assuage their skepticism?…(More)”.
Trust the Science But Do Your Research: A Comment on the Unfortunate Revival of the Progressive Case for the Administrative State
Essay by Mark Tushnet: “…offers a critique of one Progressive argument for the administrative state, that it would base policies on what disinterested scientific inquiries showed would best advance the public good and flexibly respond to rapidly changing technological, economic, and social conditions. The critique draws on recent scholarship in the field of Science and Technology Studies, which argues that what counts as a scientific fact is the product of complex social, political, and other processes. The critique is deployed in an analysis of the responses of the U.S. Centers for Disease Control and Food and Drug Administration to some important aspects of the COVID crisis in 2020.
A summary of the overall argument is this: The COVID virus had characteristics that made it exceptionally difficult to develop policies that would significantly limit its spread until a vaccine was available, and some of those characteristics went directly to the claim that the administrative state could respond flexibly to rapidly changing conditions. But, and here is where the developing critique of claims about scientific expertise enters, the relevant administrative agencies were bureaucracies with scientific staff members, and what those bureaucracies regarded as “the science” was shaped in part by bureaucratic and political considerations, and the parts that were so shaped were important components of the overall policy response.
Part II describes policy-relevant characteristics of knowledge about the COVID virus and explains why those characteristics made it quite difficult for more than a handful of democratic nations to adopt policies that would effectively limit its penetration of their populations. Part III begins with a short presentation of the aspects of the STS critique of claims about disinterested science that have some bearing on policy responses to the pandemic. It then provides an examination shaped by that critique of the structures of the Food and Drug Administration and the Centers for Disease Control, showing how those structural features contributed to policy failures. Part IV concludes by sketching how the STS critique might inform efforts to reconstruct rather than deconstruct the administrative state, proposing the creation of Citizen Advisory Panels in science-based agencies…(More)”.
Mapping the Demand Side of Computational Social Science for Policy
Report by M. Alonso Raposo et al.: “This report aims at collecting novel and pressing policy issues that can be addressed by Computational Social Science (CSS), an emerging discipline that is rooted in the increasing availability of digital trace data and computational resources and seeks to apply data science methods to social sciences. The questions were sourced from researchers at the European Commission who work at the interface between science and policy and who are well positioned to formulate research questions that are likely to anticipate future policy needs.
The attempt is to identify possible directions for Computational Social Science starting from the demand side, making it an effort to consider not only how science can ultimately provide policy support (“Science for Policy”) but also how policymakers can be involved in the process of defining and co-creating the CSS4P agenda from the outset (“Policy for Science”). The report is expected to raise awareness of the latest scientific advances in Computational Social Science and of its potential for policy, integrating the knowledge of policymakers and stimulating further questions in the context of future developments of this initiative…(More)”.