Learning to Share: Lessons on Data-Sharing from Beyond Social Media


Paper by CDT: “What role has social media played in society? Did it influence the rise of Trumpism in the U.S. and the passage of Brexit in the UK? What about the way authoritarians exercise power in India or China? Has social media undermined teenage mental health? What about its role in building social and community capital, promoting economic development, and so on?

To answer these and other important policy-related questions, researchers (academics, journalists, and others) need access to data from social media companies. However, this data is generally not available to researchers outside of social media companies and, where it is available, it is often insufficient, leaving us with incomplete answers.

Governments on both sides of the Atlantic have passed or proposed legislation to address the problem by requiring social media companies to provide certain data to vetted researchers (Vogus, 2022a). Researchers themselves have thought a lot about the problem, including the specific types of data that can further public interest research, how researchers should be vetted, and the mechanisms companies can use to provide data (Vogus, 2022b).

For their part, social media companies have sanctioned some methods of sharing data with certain types of researchers through APIs (e.g., researchers with university affiliations) and with certain limitations (such as limits on how much and what types of data are available). In general, these efforts have been insufficient. In part, this is due to legitimate concerns such as the need to protect user privacy or to avoid revealing company trade secrets. But in some cases, the lack of sharing is due to other factors, such as a lack of resources, limited knowledge of how to share data effectively, or resistance to independent scrutiny.

The problem is complex but not intractable. In this report, we look to other industries where companies share data with researchers through different mechanisms while also addressing concerns around privacy. In doing so, our analysis contributes to current public and corporate discussions about how to safely and effectively share social media data with researchers. We review experiences based on the governance of clinical trials, electricity smart meters, and environmental impact data…(More)”

Co-Producing Sustainability Research with Citizens: Empirical Insights from Co-Produced Problem Frames with Randomly Selected Citizens


Paper by Mareike Blum: “In sustainability research, knowledge co-production can play a supportive role at the science-policy interface (Norström et al., 2020). However, so far most projects have involved stakeholders in order to produce ‘useful knowledge’ for policy-makers. As a novel approach, research projects have integrated randomly selected citizens during knowledge co-production to make policy advice more reflective of societal perspectives and thereby increase its epistemic quality. Researchers are asked to consider citizens’ beliefs and values and integrate these into their ongoing research. This approach rests on pragmatist philosophy, according to which joint deliberation on value priorities and the anticipated consequences of policy options ideally allows sustainable and legitimate policy pathways to be co-developed (Edenhofer & Kowarsch, 2015; Kowarsch, 2016). This paper scrutinizes three promises of involving citizens in the problem framing: (1) creating input legitimacy, (2) enabling social learning among citizens and researchers, and (3) resulting in high epistemic quality of the co-produced knowledge. Based on empirical data, the first phases of two research projects in Germany were analysed and compared: the Ariadne research project on the German energy transition, and the Biesenthal Forest project at the local level in Brandenburg, Germany. We found that, although barriers exist, learning was enabled by confronting researchers with citizens’ problem perceptions. The step in which researchers interpret and translate problem frames in the follow-up knowledge production is the most important for assessing learning and epistemic quality…(More)”.

Math for Future Scientists: Require Statistics, Not Calculus


Essay by Robert C. Thornett: “The common requirement to pass calculus in order to major in a science is a killer of students’ dreams. And it unnecessarily limits the pool of future scientists.

Charles Darwin is a classic example of a genius naturalist who was not a natural at math. As a young man, he sailed around the world aboard the HMS Beagle and explored the giant tortoises and iguanas of the Galapagos, the rainforests of Brazil, and the coral reefs of the South Pacific. From these sorts of direct engagements with nature, he developed his theory of evolution, which revolutionized science. But Darwin wrote in his autobiography that after studying math as a young man, he found that “it was repugnant to me.” When statistics stumped Darwin during his experiments investigating the advantages of crossbreeding plants, he called on his cousin, the statistician Francis Galton, to try to make sense of the numbers.

Similarly, Thomas Edison said that as a boy he had a “distaste for mathematics.” But this did not stop him from becoming one of the most famous scientific inventors of all time. “I can always hire a mathematician,” said Edison, “but they can’t hire me.” Edison was so interested in chemistry that at the age of 13, when he got a job as a newsboy and concessionaire on the Grand Trunk Railroad, he brought a chemistry set aboard so he could do experiments during layovers. Math and science are distinctly different fields, and a talent for one does not imply a talent for the other.

According to professor emeritus Andrew Hacker of Queens College of the City University of New York, less than five percent of Americans will ever use any higher math at all in their jobs, including not only calculus but algebra, geometry, and trigonometry. And less than one percent will ever use calculus on the job. Born in 1929 and holding a PhD from Princeton, Hacker taught college political science for decades and has also been a math professor. His book The Math Myth: And Other STEM Delusions argues that not only college students but high school students should not be required to take algebra, geometry, trigonometry, or calculus at all. Hacker points out that not passing ninth grade algebra is the foremost academic indicator that a student will drop out of high school.

Before the objections tumble forth, I should emphasize that both Hacker and I like math and neither of us wants to remove all math requirements; we want to improve them. And I believe high school students should be required to study algebra and geometry. But Hacker’s larger argument is that both high schools and colleges should switch to teaching more useful types of math that can help students navigate the real world. He says American schools teach basic arithmetic well up to around middle school, but they stop there when they should continue teaching what he calls “adult arithmetic” or “sophisticated arithmetic” rather than veer off into more abstract types of math…(More)”.

Superhuman science: How artificial intelligence may impact innovation


Working paper by Ajay Agrawal, John McHale, and Alexander Oettl: “New product innovation in fields like drug discovery and material science can be characterized as combinatorial search over a vast range of possibilities. Modeling innovation as a costly multi-stage search process, we explore how improvements in Artificial Intelligence (AI) could affect the productivity of the discovery pipeline by allowing improved prioritization of innovations that flow through that pipeline. We show how AI-aided prediction can increase the expected value of innovation and can increase or decrease the demand for downstream testing, depending on the type of innovation, and examine how AI can reduce costs associated with well-defined bottlenecks in the discovery pipeline. Finally, we discuss the critical role that policy can play in mitigating potential market failures associated with access to and provision of data, as well as the provision of the training necessary to more closely approach the socially optimal level of productivity-enhancing innovations enabled by this technology…(More)”.
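
The prioritization mechanism the abstract describes can be illustrated with a toy simulation: candidates with latent quality are ranked by a noisy prediction of that quality, and a fixed downstream-testing budget is spent on the top-ranked ones. This is a sketch under invented assumptions, not the authors' model; the candidate distribution, noise levels, and budget are all made up for illustration.

```python
import random

random.seed(1)

# All quantities below are invented for illustration; they are not the
# authors' model or calibration.
N_CANDIDATES = 10_000
TEST_BUDGET = 100          # downstream tests the pipeline can afford
NOISE_WITHOUT_AI = 1.0     # prediction noise when prioritizing crudely
NOISE_WITH_AI = 0.2        # sharper AI-aided prediction narrows the noise

def mean_quality_of_tested(noise):
    """Rank candidates by a noisy prediction of their latent quality and
    return the mean true quality of the TEST_BUDGET candidates tested."""
    candidates = [random.gauss(0, 1) for _ in range(N_CANDIDATES)]
    ranked = sorted(candidates,
                    key=lambda quality: quality + random.gauss(0, noise),
                    reverse=True)
    tested = ranked[:TEST_BUDGET]
    return sum(tested) / len(tested)

without_ai = mean_quality_of_tested(NOISE_WITHOUT_AI)
with_ai = mean_quality_of_tested(NOISE_WITH_AI)
# Sharper prediction concentrates the same fixed testing budget on better
# candidates, raising the expected value of what flows through the pipeline.
```

The sketch only captures one channel from the abstract (better prioritization under a fixed budget); the paper's point that AI can also raise or lower the demand for testing itself is not modeled here.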

Rethinking Intelligence In A More-Than-Human World


Essay by Amanda Rees: “We spend a lot of time debating intelligence — what does it mean? Who has it? And especially lately — can technology help us create or enhance it?

But for a species that relies on its self-declared “wisdom” to differentiate itself from all other animals, a species that consistently defines itself as intelligent and rational, Homo sapiens tends to do some strikingly foolish things — creating the climate crisis, for example, or threatening the survival of our world with nuclear disaster, or creating ever-more-powerful and pervasive algorithms. 

If we are in fact to be “wise,” we need to learn to manage a range of different and potentially existential risks relating to (and often created by) our technological interventions in the bio-social ecologies we inhabit. We need, in short, to rethink what it means to be intelligent. 

Points Of Origin

Part of the problem is that we think of both “intelligence” and “agency” as objective, identifiable, measurable human characteristics. But they’re not. At least in part, both concepts are instead the product of specific historical circumstances. “Agency,” for example, emerges with the European Enlightenment, perhaps best encapsulated in Giovanni Pico della Mirandola’s “Oration on the Dignity of Man.” Writing in the late 15th century, Mirandola revels in the fact that to humanity alone “it is granted to have whatever he chooses, to be whatever he wills. … On man … the Father conferred the seeds of all kinds and the germs of every way of life. Whatever seeds each man cultivates will grow to maturity and bear in him their own fruit.”

In other words, what makes humans unique is their possession of the God-given capacity to exercise free will — to take rational, self-conscious action in order to achieve specific ends. Today, this remains the model of agency that underpins significant and influential areas of public discourse. It resonates strongly with neoliberalist reforms of economic policy, for example, as well as with debates on public health responsibility and welfare spending. 

A few hundred years later, the modern version of “intelligence” appears, again in Europe, where it came to be understood as a capacity for ordered, rational, problem-solving, pattern-recognizing cognition. Through the work of the eugenicist Francis Galton, among others, intelligence soon came to be regarded as an innate quality possessed by individuals to greater or lesser degree, which could be used to sort populations into hierarchies of social access and economic reward…(More)”.

Exhaustive or Exhausting? Evidence on Respondent Fatigue in Long Surveys


Paper by Dahyeon Jeong et al: “Living standards measurement surveys require sustained attention for several hours. We quantify survey fatigue by randomizing the order of questions in 2-3 hour-long in-person surveys. An additional hour of survey time increases the probability that a respondent skips a question by 10-64%. Because skips are more common, the total monetary value of aggregated categories such as assets or expenditures declines as the survey goes on, and this effect is sizeable for some categories: for example, an extra hour of survey time lowers food expenditures by 25%. We find similar effect sizes within phone surveys in which respondents were already familiar with questions, suggesting that cognitive burden may be a key driver of survey fatigue…(More)”.
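
The identification idea in the abstract (randomizing question order so that each question's position in the interview varies across respondents) can be illustrated with a toy simulation. The parameters below (`BASE_SKIP`, `FATIGUE_SLOPE`) are invented assumptions, not the paper's estimates, and skipping is driven purely by position to mimic fatigue.

```python
import random

random.seed(0)

# Invented parameters for illustration only; not the paper's estimates.
N_QUESTIONS = 100      # questions per interview
N_RESPONDENTS = 2000
BASE_SKIP = 0.02       # skip probability at the start of the interview
FATIGUE_SLOPE = 0.001  # extra skip probability per question already asked

def skip_rate_by_half():
    """Simulate interviews with randomized question order and return the
    skip rates observed in the first and second halves of the interview.
    Randomizing order is what lets position effects (fatigue) be separated
    from question effects, since every question appears early for some
    respondents and late for others."""
    skips = [0, 0]
    asked = [0, 0]
    for _ in range(N_RESPONDENTS):
        order = random.sample(range(N_QUESTIONS), N_QUESTIONS)
        for position, _question in enumerate(order):
            half = 0 if position < N_QUESTIONS // 2 else 1
            asked[half] += 1
            if random.random() < BASE_SKIP + FATIGUE_SLOPE * position:
                skips[half] += 1
    return skips[0] / asked[0], skips[1] / asked[1]

early, late = skip_rate_by_half()
# Skips come out noticeably more common in the later half of the interview.
```
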

Community science draws on the power of the crowd


Essay by Amber Dance: “In community science, also called participatory science, non-professionals contribute their time, energy or expertise to research. (The term ‘citizen science’ is also used but can be perceived as excluding non-citizens.)

Whatever name is used, the approach is more popular than ever and even has journals dedicated to it. The number of annual publications mentioning ‘citizen science’ went from 151 in 2015 to more than 640 in 2021, according to the Web of Science database. Researchers from physiologists to palaeontologists to astronomers are finding that harnessing the efforts of ordinary people is often the best route to the answers they seek.

“More and more funding organizations are actually promoting this type of participatory- and citizen-science data gathering,” says Bálint Balázs, managing director of the Environmental Social Science Research Group in Budapest, a non-profit company focusing on socio-economic research for sustainability.

Community science is also a great tool for outreach, and scientists often delight in interactions with amateur researchers. But it’s important to remember that community science is, foremost, a research methodology like any other, with its own requirements in terms of skill and effort.

“To do a good project, it does require an investment in time,” says Darlene Cavalier, founder of SciStarter, an online clearing house that links research-project leaders with volunteers. “It’s not something where you’re just going to throw up a Google form and hope for the best.” Although scientific data are sometimes freely and easily available, other projects incur significant costs.

No matter what the topic or approach, people skills are crucial: researchers must identify and cultivate a volunteer community and provide regular feedback or rewards. With the right protocols and checks and balances, the quality of volunteer-gathered data often rivals or surpasses that achieved by professionals.

“There is a two-way learning that happens,” says Tina Phillips, assistant director of the Center for Engagement in Science and Nature at Cornell University in Ithaca, New York. “We all know that science is better when there are more voices, more perspectives.”…(More)”

Cloud labs and remote research aren’t the future of science – they’re here


Article by Tom Ireland: “Cloud labs mean anybody, anywhere can conduct experiments by remote control, using nothing more than their web browser. Experiments are programmed through a subscription-based online interface – software then coordinates robots and automated scientific instruments to perform the experiment and process the data. Friday night is Emerald’s busiest time of the week, as scientists schedule experiments to run while they relax with their families over the weekend.

There are still some things robots can’t do, for example lifting giant carboys (containers for liquids) or unwrapping samples sent by mail, and there are a few instruments that just can’t be automated. Hence the people in blue coats, who look a little like pickers in an Amazon warehouse. It turns out that they are, in fact, mostly former Amazon employees.

Plugging an experiment into a browser forces researchers to translate the exact details of every step into unambiguous code

Emerald originally employed scientists and lab technicians to help the facility run smoothly, but with so little to do, they were creatively stifled. Poaching Amazon employees has turned out to be an improvement. “We pay them twice what they were getting at Amazon to do something way more fulfilling than stuffing toilet paper into boxes,” says Frezza. “You’re keeping someone’s drug-discovery experiment running at full speed.”

Further south in the San Francisco Bay Area are two more cloud labs, run by the company Strateos. Racks of gleaming life science instruments – incubators, mixers, mass spectrometers, PCR machines – sit humming inside large Perspex boxes known as workcells. The setup is arguably even more futuristic than at Emerald. Here, reagents and samples whizz to the correct workcell on hi-tech magnetic conveyor belts and are gently loaded into place by dextrous robot arms. Researchers’ experiments are “delocalised”, as Strateos’s executive director of operations, Marc Siladi, puts it…(More)”.

Collection of Case Studies of Institutional Adoption of Citizen Science


About TIME4CS: “The first objective was to increase our knowledge about the actions leading to institutional changes in RPOs (which are necessary to promote CS in science and technology) through a complete and up-to-date picture based upon the identification, mapping, monitoring and analysis of ongoing CS practices. To accomplish this objective, we, the TIME4CS project team, have collected and analysed 37 case studies on the institutional adoption of Citizen Science and Open Science around the world, which this article addresses.

Opening up an organisation to data and information produced outside it, under a different framework for data collection and quality assurance, poses multiple challenges. These include existing practices and procedures and legal obligations, as well as internal resistance stemming from the framing of such action as a threat. Research carried out with multiple international case studies (Haklay et al. 2014; GFDRR 2018) demonstrated the importance of the different institutional and funding structures needed to enable such activities and the use of the resulting information…(More)”.

Nudging Science Towards Fairer Evaluations: Evidence From Peer Review


Paper by Inna Smirnova, Daniel M. Romero, and Misha Teplitskiy: “Peer review is widely used to select scientific projects for funding and publication, but there is growing evidence that it is biased towards prestigious individuals and institutions. Although anonymizing submissions can reduce prestige bias, many organizations do not implement anonymization, in part because enforcing it can be prohibitively costly. Here, we examine whether nudging, but not forcing, authors to anonymize their submissions reduces prestige bias. We partnered with IOP Publishing, one of the largest academic publishers, which adopted a policy strongly encouraging authors to anonymize their submissions and staggered the policy rollout across its physics journal portfolio. We examine 156,015 submissions to 57 peer-reviewed journals received between January 2018 and February 2022 and measure author prestige with citations accrued at submission time. Higher-prestige first authors were less likely to anonymize. Nevertheless, for low-prestige authors, the policy increased positive peer reviews by 2.4% and acceptance by 5.6%. For middle- and high-prestige authors, the policy decreased positive reviews (by 1.8% and 1%) and final acceptance (by 4.6% and 2.2%). The policy did not have unintended consequences on reviewer recruitment or the characteristics of submitting authors. Overall, nudges are a simple, low-cost, and effective method to reduce prestige bias and should be considered by organizations for which enforced anonymization is impractical…(More)”.
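
Because the publisher staggered the rollout across journals, the policy's effect can in principle be estimated by comparing the change at journals after adoption against the contemporaneous change at journals that had not yet adopted. A minimal difference-in-differences sketch with made-up acceptance rates (the paper's actual estimation is far richer than this):

```python
# Hypothetical numbers for illustration only; not the paper's data.
def diff_in_diff(treated_before, treated_after, control_before, control_after):
    """Change in the treated group net of the contemporaneous control change."""
    return (treated_after - treated_before) - (control_after - control_before)

# Invented acceptance rates for low-prestige authors at adopting ("treated")
# and not-yet-adopting ("control") journals:
effect = diff_in_diff(treated_before=0.30, treated_after=0.33,
                      control_before=0.31, control_after=0.315)
# effect ≈ 0.025, i.e. a 2.5-percentage-point rise net of the background trend
```

The staggering matters because it provides not-yet-treated journals as controls at each point in time, netting out publisher-wide shifts in acceptance rates.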