The coloniality of collaboration: sources of epistemic obedience in data-intensive astronomy in Chile


Paper by Sebastián Lehuedé: “Data collaborations have gained currency over the last decade as a means for data- and skills-poor actors to thrive as a fourth paradigm takes hold in the sciences. Against this backdrop, this article traces the emergence of a collaborative subject position that strives to establish reciprocal and technical-oriented collaborations so as to catch up with the ongoing changes in research.

Combining insights from the modernity/coloniality group, political theory and science and technology studies, the article argues that this positionality engenders epistemic obedience by bracketing off critical questions regarding with whom and for whom knowledge is generated. In particular, a dis-embedding of the data producers, the erosion of local ties, and a data conformism are identified as fresh sources of obedience impinging upon the capacity to conduct research attuned to the needs and visions of the local context. A discursive-material analysis of interviews and field notes stemming from the case of astronomy data in Chile is conducted, examining the vision of local actors aiming to gain proximity to the mega observatories producing vast volumes of data in the Atacama Desert.

Given that these observatories are predominantly under the control of organisations from the United States and Europe, the adoption of a collaborative stance is now seen as the best means to ensure skills and technology transfer to local research teams. Delving into the epistemological dimension of data colonialism, this article warns that an increased emphasis on collaboration runs the risk of reproducing planetary hierarchies in times of data-intensive research….(More)”.

New knowledge environments. On the possibility of a citizen social science.


Article by Joseph Perelló: “Citizen science is in a process of consolidation, with a wide variety of practices and perspectives. Social sciences and humanities occupy a small space despite the obvious social dimension of citizen science. In this sense, citizen social science can enrich the concept of citizen science both because the research objective can also be of a social nature and because it provides greater reflection on the active participation of individuals, groups, or communities in research projects. Based on different experiences, this paper proposes that citizen social science should have the capacity to empower participants and provide them with skills to promote collective actions or public policies based on co-created knowledge.

Citizen science is commonly recognised as the participation of the public in scientific research (Vohland et al., 2021). It has been promoted as a way to collect massive amounts of data and accelerate its processing, while also raising awareness and spreading knowledge and a better understanding of both scientific methods and the social relevance of results (Parrish et al., 2019). Some researchers support the idea of maintaining the generality and vagueness of the term citizen science (Auerbach et al., 2019), due to the youth of the discipline and the different ways it can be understood (Haklay et al., 2020). Such diversity can be considered positively, as a way to enrich citizen science and, more generally, as a catalyst for the emergence of trans-disciplinary and transformative science.

The sociologist Alan Irwin, one of the authors to whom we owe the concept, already said over 25 years ago: «Citizen Science evokes a science which assists the needs and concerns of citizens» (Irwin, 1995, p. xi). The book argues that citizens can create reliable knowledge. However, decades later, the number of contributions using the term citizen science in social sciences and humanities remains small, smaller than the number of items published in environmental sciences or biology, which predominate in the field (Kullenberg & Kasperowski, 2016). Nevertheless, there is a growing consensus that social sciences and humanities are necessary for citizen science to reach maturity, both so that the object of study can also be of a social nature and so that these disciplines can provide a more elaborate reflection on participation in citizen science projects (Tauginienė et al., 2020)….(More)”.

Seek diversity to solve complexity


Katrin Prager at Nature: “As a social scientist, I know that one person cannot solve a societal problem on their own — and even a group of very intelligent people will struggle to do it. But we can boost our chances of success if we ensure not only that the team members are intelligent, but also that the team itself is highly diverse.

By ‘diverse’ I mean demographic diversity encompassing things such as race, gender identity, class, ethnicity, career stage and age, and cognitive diversity, including differences in thoughts, insights, disciplines, perspectives, frames of reference and thinking styles. And the team needs to be purposely diverse instead of arbitrarily diverse.

In my work I focus on complex world problems, such as how to sustainably manage our natural resources and landscapes, and I’ve found that it helps to deliberately assemble diverse teams. This effort requires me to be aware of the different ways in which people can be diverse, and to reflect on my own preferences and biases. Sometimes the teams might not be as diverse as I’d like. But I’ve found that making the effort not only to encourage diversity, but also to foster better understanding between team members, reaps dividends….(More)”

Future Directions for Citizen Science and Public Policy


Open Access Book by The Centre for Science and Policy: “…The OED tells us that citizen science is “scientific work undertaken by members of the general public, often in collaboration with or under the direction of professional scientists and scientific institutions.” However, even this definition raises many questions for policy makers trying to figure out how they might make use of it: “What is the difference between a volunteer in a scientific study and a citizen scientist?” they might ask. “Are all forms of public engagement with science considered citizen science?” or “What does it look like in practice?” – or even “Why do I need to bother engaging citizen science at all?”

This collection of essays presents a range of perspectives on these questions, and we hope it will encourage greater use of citizen science by governments. The authors have been brought together by the Centre for Science and Policy (CSaP) through a series of seminars, lectures and an online conference. Three observations were made time and again:

  • First, there has been an extraordinary flourishing of citizen science during the past two decades. Huge numbers have participated in projects ranging from spotting patterns in protein structures to monitoring local air pollution; from garden bird surveys to deciphering the handwritten notes from the archives of philosophers; and from tracing radioactive contamination to spotting new planets in distant galaxies.
  • Second, there is a growing imperative in government to find new ways to involve citizens as partners in the development and delivery of policy.
  • Third, while public funds have supported the expansion of citizen science’s contributions to scientific research, there have been surprisingly few experiments drawing on citizen science to contribute to the business of government itself…(More)”

Are we all social scientists now? The rise of citizen social science raises more questions about social science than it answers


Blog by Alexandra Albert: “…In many instances people outside of the academy can, and do, do social research, even when they do not consider what they are doing to be social research, since that is perceived to be the preserve of ‘experts’. What is it about social science that makes it a skilful and expert activity, and how or why is it practiced in a way that makes it difficult to do? Citizen social science (CSS) produces tensions between the ideal of including social actors in the generation of information about the everyday, and the notion that many participants do not necessarily feel entitled, or empowered, to participate in the analysis of this information, or in the interpretation of what it means. For example, in the case of the Empty Houses project, set up to explore some of the issues discussed here in more detail, some participants suggested they did not feel comfortable reporting on empty houses because they found them hard to identify and assumed that some prior knowledge or ‘expertise’ was required. CSS is the perfect place to interrogate these tensions, since it challenges the closed nature of social science.

Second, CSS blurs the roles of researcher and researched, creating new responsibilities for participants and researchers alike. A notable distinction between expert and non-expert in social science research lies in critiquing the approach and interpreting or analysing the data. However, the way that traditional social science is done, with critical analysis being the preserve of the trained expert, means that many participants do not feel that it is their role to do the analysis. Does the professionalisation of observational techniques constitute a different category of sociological data, one that requires people to be trained in formal and distinct sociological ways of collecting and analysing it? This is a challenge both for research design and execution in CSS and for the potentially new perspectives that participating in CSS can engender.

Third, in addressing social worlds, CSS questions whether such observations are just a regular part of people’s everyday lives, or whether they entail a more active form of practice in observing everyday life. In this sense, what does it really mean to participate? Is there a distinction between ‘active’ and ‘passive’ observation? Arguably, participating in a project is never just passive – it is a conscious choice, and therefore, in some respects, a burden of some sort. This further raises the issue of how to appropriately compensate participants for their time and energy, potentially as co-researchers in a project and co-authors on papers.

Finally, while CSS can rearrange the power dynamics of citizenship, research and knowing, narratives of ‘duty’ to take part, and to ‘do your bit’, necessarily place a greater burden on the individual and raise questions about the supposed emancipatory potential of participatory methods such as CSS….(More)”

Who’s Afraid of Big Numbers?


Aiyana Green and Steven Strogatz at the New York Times: “Billions” and “trillions” seem to be an inescapable part of our conversations these days, whether the subject is Jeff Bezos’s net worth or President Biden’s proposed budget. Yet nearly everyone has trouble making sense of such big numbers. Is there any way to get a feel for them? As it turns out, there is. If we can relate big numbers to something familiar, they start to feel much more tangible, almost palpable.

For example, consider Senator Bernie Sanders’s signature reference to “millionaires and billionaires.” Politics aside, are these levels of wealth really comparable? Intellectually, we all know that billionaires have a lot more money than millionaires do, but intuitively it’s hard to feel the difference, because most of us haven’t experienced what it’s like to have that much money.

In contrast, everyone knows what the passage of time feels like. So consider how long it would take for a million seconds to tick by. Do the math, and you’ll find that a million seconds is about 12 days. And a billion seconds? That’s about 32 years. Suddenly the vastness of the gulf between a million and a billion becomes obvious. A million seconds is a brief vacation; a billion seconds is a major fraction of a lifetime.
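The arithmetic behind those figures is easy to verify. Here is a minimal Python sketch of the calculation (the day and year conversions are supplied by us for illustration, not taken from the article):

```python
# Quick back-of-the-envelope check of the million-vs-billion seconds comparison.
SECONDS_PER_DAY = 60 * 60 * 24            # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25

million_in_days = 1_000_000 / SECONDS_PER_DAY         # ~11.6 days
billion_in_years = 1_000_000_000 / SECONDS_PER_YEAR   # ~31.7 years

print(f"A million seconds is about {million_in_days:.1f} days")
print(f"A billion seconds is about {billion_in_years:.1f} years")
```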

Comparisons to ordinary distances provide another way to make sense of big numbers. Here in Ithaca, we have a scale model of the solar system known as the Sagan Walk, in which all the planets and the gaps between them are reduced by a factor of five billion. At that scale, the sun becomes the size of a serving plate, Earth is a small pea and Jupiter is a brussels sprout. To walk from Earth to the sun takes just a few dozen footsteps, whereas Pluto is a 15-minute hike across town. Strolling through the solar system, you gain a visceral understanding of astronomical distances that you don’t get from looking at a book or visiting a planetarium. Your body grasps it even if your mind cannot….(More)”.
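The same scaling can be reproduced with a short sketch. The real-world sizes and distances below are approximate ballpark values we have supplied for illustration; only the 1:5,000,000,000 reduction factor comes from the description of the Sagan Walk:

```python
# Rough scale conversion for a 1:5,000,000,000 model of the solar system.
SCALE = 5_000_000_000  # reduction factor cited for the Sagan Walk

# Approximate real-world sizes and distances, in metres (ballpark values).
real_metres = {
    "Sun diameter": 1.39e9,
    "Earth diameter": 1.27e7,
    "Jupiter diameter": 1.40e8,
    "Earth-Sun distance": 1.50e11,
    "Sun-Pluto distance (mean)": 5.9e12,
}

for name, metres in real_metres.items():
    print(f"{name}: {metres / SCALE:,.3f} m at model scale")

# Roughly: Sun ~0.28 m (a serving plate), Earth ~2.5 mm (a small pea),
# Jupiter ~2.8 cm (a brussels sprout), Earth-Sun ~30 m, Sun-Pluto ~1.2 km.
```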

Scientific publishing’s new weapon for the next crisis: the rapid correction


Gideon Meyerowitz-Katz and James Heathers at STAT News: “If evidence of errors does emerge, the process for correcting or withdrawing a paper tends to be alarmingly long. Late last year, for example, David Cox, the IBM director of the MIT-IBM Watson AI Lab, discovered that his name was included as an author on two papers he had never written. After he wrote to the journals involved, it took almost three months for them to remove his name and the papers themselves. In cases of large-scale research fraud, correction times can be measured in years.

Imagine now that the issue with a manuscript is not a simple matter of retracting a fraudulent paper, but a more complex methodological or statistical problem that undercuts the study’s conclusions. In this context, requests for clarification — or retraction — can languish for years. The process can outlast the tenure of the responsible editor, resetting the clock on the entire ordeal, or the journal itself can cease publication, leaving an erroneous article in the public domain without oversight, forever….

This situation must change, and change quickly. Any crisis that requires scientific information in a hurry will produce hurried science, and hurried science often includes miscalculated analyses, poor experimental design, inappropriate statistical models, impossible numbers, or even fraud. Having the agility to produce and publicize work like this without having the ability to correct it just as quickly is a curiously persistent oversight in the global scientific enterprise. If corrections occur only long after the research has already been used to treat people across the world, what use are they at all?

There are some small steps in the right direction. The open-source website PubPeer aggregates formal scientific criticism, and when shoddy research makes it into the literature, hordes of critics may leave comments and questions on the site within hours. Twitter, likewise, is often abuzz with spectacular scientific critiques almost as soon as studies go up online.

But these volunteer efforts are not enough. Even when errors are glaring and obvious, the median response from academic journals is to deal with them grudgingly or not at all. Academia in general takes a faintly disapproving tone toward crowd-sourced error correction, ignoring the fact that it is often the only mechanism that exists to do this vital work.

Scientific publishing needs to stop treating error-checking as a slightly inconvenient side note and make it a core part of academic research. In a perfect world, entire departmental sections would be dedicated to making sure that published research is correct and reliable. But even a few positions would be a fine start. Young researchers could be given kudos not just for every citation in their Google Scholar profile but also for every post-publication review they undertake….(More)”

Examining the Intersection of Behavioral Science and Advocacy


Introduction to Special Collection of the Behavioral Scientist by Cintia Hinojosa and Evan Nesterak: “Over the past year, everyone’s lives have been touched by issues that intersect science and advocacy—the pandemic, climate change, police violence, voting, protests, the list goes on. 

These issues compel us, as a society and individuals, toward understanding. We collect new data, design experiments, test our theories. They also inspire us to examine our personal beliefs and values, our roles and responsibilities as individuals within society. 

Perhaps no one feels these forces more than social and behavioral scientists. As members of fields dedicated to the study of social and behavioral phenomena, they are in the unique position of understanding these issues from a scientific perspective, while also navigating their inevitable personal impact. This dynamic brings up questions about the role of scientists in a changing world. To what extent should they engage in advocacy or activism on social and political issues? Should they be impartial investigators, active advocates, or something in between?

It also raises other questions, like: does taking a public stance on an issue affect scientific integrity? How should scientists interact with those setting policies? What happens when the lines between an evidence-based stance and a political position become blurred? What should scientists do when science itself becomes a partisan issue?

To learn more about how social and behavioral scientists are navigating this terrain, we put out a call inviting them to share their ideas, observations, personal reflections, and the questions they’re grappling with. We gave them 100-250 words to share what was on their mind. Not easy for such a complex and consequential topic.

The responses, collected and curated below, revealed a number of themes, which we’ve organized into two parts….(More)”.

Is It Time for a U.S. Department of Science?


Essay by Anthony Mills: “The Biden administration made history earlier this year by elevating the director of the Office of Science and Technology Policy to a cabinet-level post. There have long been science advisory bodies within the White House, and there are a number of executive agencies that deal with science, some of them cabinet-level. But this will be the first time in U.S. history that the president’s science advisor will be part of his cabinet.

It is a welcome effort to restore the integrity of science, at a moment when science has been thrust onto the center stage of public life — as something indispensable to political decision-making as well as a source of controversy and distrust. Some have urged the administration to go even further, calling for the creation of a new federal department of science. Such calls to centralize science have a long history, and have grown louder during the coronavirus pandemic, spurred by our government’s haphazard response.

But more centralization is not the way to restore the integrity of science. Centralization has its place, especially during national emergencies. Too much of it, however, is bad for science. As a rule, science flourishes in a decentralized research environment, which balances the need for public support, effective organization, and political accountability with scientific independence and institutional diversity. The Biden administration’s move is welcome. But there is risk in what it could lead to next: an American Ministry of Science. And there is an opportunity to create a needed alternative….(More)”.

The replication crisis won’t be solved with broad brushstrokes


David Peterson at Nature: “Alarm about a ‘replication crisis’ launched a wave of projects that aimed to quantitatively evaluate scientific reproducibility: statistical analyses, mass replications and surveys. Such efforts, collectively called metascience, have grown into a social movement advocating broad reforms: open-science mandates, preregistration of experiments and new incentives for careful research. It has drawn attention to the need for improvements, and caused rancour.

Philosophers, historians and sociologists no longer accept a single, unified definition of science. Instead, they document how scientists in different fields have developed unique practices of producing, communicating and evaluating evidence, guided loosely by a set of shared values. However, this diversity and underlying scholarship are often overlooked by metascience activists.

Over the past three years, Aaron Panofsky, a sociologist at the University of California, Los Angeles, and I have interviewed 60 senior biologists, chemists, geologists and physicists who are reviewing editors at Science, plus another 83 scientists seeking science-wide reforms. These highly recognized researchers saw growing interest in making science more open and robust — but also expressed scepticism.

Senior researchers bristled at the idea that their fields were in ‘crisis’, and suspected that activists were seeking recognition for themselves. A frustrated biologist argued that people running mass replication studies “were not motivated to find reproducibility” and benefited from finding it lacking. Others said metascientists dismissed replication work done to further a line of research rather than assess the state of the literature. Another saw data deposition as a frustrating, externally imposed mandate: “We’re already drowning in all the bureaucratic crap.”

Even those who acknowledged the potential value of reforms, such as those for data sharing, felt that there was no discussion about the costs. “If you add up all of the things that only take ten minutes, it’s a huge chunk of your day.”…(More)”.