A.I.-Generated Garbage Is Polluting Our Culture


Article by Erik Hoel: “Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.

Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate. Especially within the field of A.I. itself.

A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.

Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I. — or, at minimum, writing them with lots of A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?)…(More)”.
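The word-frequency comparison at the heart of that study can be sketched in a few lines. The snippet below is a toy illustration only: the two mini-“corpora” and the buzzword list are invented stand-ins, not the study’s data or code.

```python
import re

# Toy illustration of the study's core measurement: how much more often
# LLM-favoured adjectives appear in one year's reviews than the previous
# year's. These two mini-corpora are invented; the actual study analysed
# tens of thousands of conference peer reviews.
reviews_2022 = ("The proofs are solid and the analysis is meticulous. "
                "A commendable effort, though one intricate lemma needs more work.")
reviews_2023 = ("A meticulous and commendable study. The meticulous ablations and "
                "intricate architecture are commendable, and the intricate "
                "proofs are meticulous.")

BUZZWORDS = ("commendable", "intricate", "meticulous")

def rate(text: str, word: str) -> float:
    """Occurrences of `word` per token of `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return tokens.count(word) / len(tokens)

# Year-over-year ratio of each buzzword's per-token rate.
for word in BUZZWORDS:
    ratio = rate(reviews_2023, word) / rate(reviews_2022, word)
    print(f"{word}: {ratio:.1f}x year-over-year")
```

The study’s striking numbers (34x for “meticulous”) come from exactly this kind of ratio, computed over far larger corpora.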

Citizen silence: Missed opportunities in citizen science


Paper by Damon M. Hall et al.: “Citizen science is personal. Participation is contingent on the citizens’ connection to a topic or to interpersonal relationships meaningful to them. But from the peer-reviewed literature, scientists appear to have an acquisitive data-centered relationship with citizens. This has spurred ethical and pragmatic criticisms of extractive relationships with citizen scientists. We suggest five practical steps to shift citizen-science research from extractive to relational, reorienting the research process and providing reciprocal benefits to researchers and citizen scientists. By virtue of their interests and experience within their local environments, citizen scientists have expertise that, if engaged, can improve research methods and product design decisions. To boost the value of scientific outputs to society and participants, citizen-science research teams should rethink how they engage and value volunteers…(More)”.

Bring on the Policy Entrepreneurs


Article by Erica Goldman: “Teaching early-career researchers the skills to engage in the policy arena could prepare them for a lifetime of high-impact engagement and invite new perspectives into the democratic process.

In the first six months of the COVID-19 pandemic, the scientific literature worldwide was flooded with research articles, letters, reviews, notes, and editorials related to the virus. One study estimates that a staggering 23,634 unique documents were published between January 1 and June 30, 2020, alone.

Making sense of that emerging science was an urgent challenge. As governments all over the world scrambled to get up-to-date guidelines to hospitals and information to an anxious public, Australia stood apart in its readiness to engage scientists and decisionmakers collaboratively. The country used what was called a “living evidence” approach to synthesizing new information, making it available—and helpful—in real time.

Each week during the pandemic, the Australian National COVID‑19 Clinical Evidence Taskforce came together to evaluate changes in the scientific literature base. They then spoke with a single voice to the Australian clinical community so clinicians had rapid, evidence-based, and nationally agreed-upon guidelines to provide the clarity they needed to care for people with COVID-19.

This new model for consensus-aligned, evidence-based decisionmaking helped Australia navigate the pandemic and build trust in the scientific enterprise, but it did not emerge overnight. It took years of iteration and effort to get the living evidence model ready to meet the moment; the crisis of the pandemic opened a policy window that living evidence was poised to surge through. Australia’s example led the World Health Organization and the United Kingdom’s National Institute for Health and Care Excellence to move toward making living evidence models a pillar of decisionmaking for all their health care guidelines. On its own, this is an incredible story, but it also reveals a tremendous amount about how policies get changed…(More)”.

Meta to shut off data access to journalists


Article by Sara Fischer: “Meta plans to officially shutter CrowdTangle, the analytics tool widely used by journalists and researchers to see what’s going viral on Facebook and Instagram, the company’s president of global affairs Nick Clegg told Axios in an interview.

Why it matters: The company plans to instead offer select researchers access to a set of new data tools, but news publishers, journalists or anyone with commercial interests will not be granted access to that data.

The big picture: The effort comes amid a broader pivot from Meta away from news and politics and more toward user-generated viral videos.

  • Meta acquired CrowdTangle in 2016 at a time when publishers were heavily reliant on the tech giant for traffic.
  • In recent years, it’s stopped investing in the tool, making it less reliable.

The new research tools include Meta’s Content Library, which it launched last year, and an API, or backend interface used by developers.

  • Both tools offer researchers access to huge swaths of data from publicly accessible content across Facebook and Instagram.
  • The tools are available in 180 languages and offer global data.
  • Researchers must apply for access to those tools through the Inter-university Consortium for Political and Social Research at the University of Michigan, which will vet their requests…(More)”

The Dark World of Citation Cartels


Article by Domingo Docampo: “In the complex landscape of modern academe, the maxim “publish or perish” has been gradually evolving into a different mantra: “Get cited or your career gets blighted.” Citations are the new academic currency, and careers now firmly depend on this form of scholarly recognition. In fact, citation has become so important that it has driven a novel form of trickery: stealth networks designed to manipulate citations. Researchers, driven by the imperative to secure academic impact, resort to forming citation rings: collaborative circles engineered to artificially boost the visibility of their work. In doing so, they compromise the integrity of academic discourse and undermine the foundation of scholarly pursuit. The story of the modern “citation cartel” is not just a result of publication pressure. The rise of the mega-journal also plays a role, as do predatory journals and institutional efforts to thrive in global academic rankings.

Over the past decade, the landscape of academic research has been significantly altered by the sheer number of scholars engaging in scientific endeavors. The number of scholars contributing to indexed publications in mathematics has doubled, for instance. In response to the heightened demand for space in scientific publications, a new breed of publishing entrepreneur has seized the opportunity, and the result is the rise of mega-journals that publish thousands of articles annually. Mathematics, an open-access journal produced by the Multidisciplinary Digital Publishing Institute, published 4,763 articles in 2023, making up 9.3 percent of all publications in the field, according to the Web of Science. It has an impact factor of 2.4 and an article-influence measure of just 0.37, but, crucially, it is indexed with Clarivate’s Web of Science, Elsevier’s Scopus, and other indexers, which means its citations count toward a variety of professional metrics. (By contrast, the Annals of Mathematics, published by Princeton University, contained 22 articles last year, and has an impact factor of 4.9 and an article-influence measure of 8.3.)…(More)”
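The impact-factor figures quoted above follow a simple ratio: citations received in a given year to items the journal published in the previous two years, divided by the number of citable items in those two years. A sketch of the arithmetic, with invented counts chosen to reproduce the quoted 2.4 and 4.9:

```python
# Sketch of the two-year Journal Impact Factor arithmetic behind the figures
# quoted above. The citation and article counts are invented for illustration;
# real values come from the Web of Science.
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Citations this year to the journal's previous two years of articles,
    divided by the number of citable items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# A mega-journal: a huge article count spreads citations thinly per article.
mega = impact_factor(citations_to_prior_two_years=21_600,
                     citable_items_prior_two_years=9_000)
# A small, selective journal: few articles, each heavily cited.
selective = impact_factor(citations_to_prior_two_years=216,
                          citable_items_prior_two_years=44)

print(round(mega, 1), round(selective, 1))
# prints: 2.4 4.9
```

The contrast makes the article’s point concrete: the metric is per-article, so sheer publication volume dilutes it even when total citations are enormous.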

A Plan to Develop Open Science’s Green Shoots into a Thriving Garden


Article by Greg Tananbaum, Chelle Gentemann, Kamran Naim, and Christopher Steven Marcum: “…As it’s moved from an abstract set of principles about access to research and data into the realm of real-world activities, the open science movement has mirrored some of the characteristics of the open source movement: distributed, independent, with loosely coordinated actions happening in different places at different levels. Globally, many things are happening, often disconnected, but still interrelated: open science has sowed a constellation of thriving green shoots, not quite yet a garden, but all growing rapidly on arable soil.

It is now time to consider how much faster and farther the open science movement could go with more coordination. What efficiencies might be realized if disparate efforts could better harmonize across geographies, disciplines, and sectors? How would an intentional, systems-level approach to aligning incentives, infrastructure, training, and other key components of a rationally functioning research ecosystem advance the wider goals of the movement? Streamlining research processes, reducing duplication of efforts, and accelerating scientific discoveries could ensure that the fruits of open science processes and products are more accessible and equitably distributed…(More)”

Societal challenges and big qualitative data require a new era of methodological pragmatism


Blog by Alex Gillespie, Vlad Glăveanu, and Constance de Saint-Laurent: “The ‘classic’ methods we use today in psychology and the social sciences might seem relatively fixed, but they are the product of collective responses to concerns within a historical context. The 20th century methods of questionnaires and interviews made sense in a world where researchers did not have access to what people did or said, and even if they did, could not analyse it at scale. Questionnaires and interviews were suited to 20th century concerns (shaped by colonialism, capitalism, and the ideological battles of the Cold War) for understanding, classifying, and mapping opinions and beliefs.

However, what social scientists are faced with today is different due to the culmination of two historical trends. The first has to do with the nature of the problems we face. Inequalities, the climate emergency and current wars are compounded by a general rise in nationalism, populism, and especially post-truth discourses and ideologies. Nationalism and populism are not new, but the scale and sophistication of misinformation threatens to undermine collective responses to collective problems.

The second trend refers to technology and its accelerated development, especially the unprecedented accumulation of naturally occurring data (digital footprints) combined with increasingly powerful methods for data analysis (traditional and generative AI). It is often said that we live in the age of ‘big data’, but what is less often said is that this is in fact the age of ‘big qualitative data’. The biggest datasets are unstructured qualitative data (each minute adds 2.5 million Google text searches, 500 thousand photos on Snapchat, 500 hours of YouTube videos) and the most significant AI advances leverage this qualitative data and make it tractable for social research.

These two trends have been fuelling the rise in mixed methods research…(More)”. (See also their new book ‘Pragmatism and Methodology’ (open access).)

Why Do Universities Ignore Good Ideas?


Article by Jeffrey Funk: “Here is a recent assessment of 2023 Nobel Prize winner Katalin Karikó:

“Eight current and former colleagues of Karikó told The Daily Pennsylvanian that — over the course of three decades — the university repeatedly shunned Karikó and her research, despite its groundbreaking potential.”

Another article claims that this occurred because she could not get the financial support to continue her research.

Why couldn’t she get financial support? “You’re more likely to get grants if you’re a tenured faculty member, but you’re more likely to get promoted to tenure if you get grants,” said Eric Feigl-Ding, an epidemiologist at the New England Complex Systems Institute and a former faculty member and researcher at Harvard Medical School. “There is a vicious cycle,” he says.

Interesting. So, the idea doesn’t matter. What matters to funding agencies is that you have previously obtained funding or are a tenured professor. Really? Are funding agencies this narrow-minded?

Mr. Feigl-Ding also said, “Universities also tend to look at how much a researcher publishes, or how widely covered by the media their work is, as opposed to how innovative the research is.” But why couldn’t Karikó get published?

Science magazine tells the story of her main paper with Drew Weissman in 2005. After being rejected by Nature within 24 hours: “It was similarly rejected by Science and by Cell, and the word incremental kept cropping up in the editorial staff comments.”

Incremental? There are more than two million papers published each year, and this research, for which Karikó and Weissman won a Nobel Prize, was deemed incremental? If it had been rejected for methods or for the contents being impossible to believe, I think most people could understand the rejection. But incremental?

Obviously, most of the two million papers published each year are really incremental. Yet one of the few papers that we can all agree was not incremental gets rejected because it was deemed incremental.

Furthermore, this is happening in a system of science in which even Nature admits “disruptive science has declined,” few science-based technologies are being successfully commercialized, and Nature admits that it doesn’t understand why…(More)”.

A complexity science approach to law and governance


Introduction to a Special Issue by Pierpaolo Vivo, Daniel M. Katz and J. B. Ruhl: “The premise of this Special Issue is that legal systems are complex adaptive systems, and thus complexity science can be usefully applied to improve understanding of how legal systems operate, perform and change over time. The articles that follow take this proposition as a given and act on it using a variety of methods applied to a broad array of legal system attributes and contexts. Yet not too long ago some prominent legal scholars expressed scepticism that this field of study would produce more than broad generalizations, if even that. To orient readers unfamiliar with this field and its history, here we offer a brief background on how using complexity science to study legal systems has advanced from claims of ‘pseudoscience’ status to a widely adopted mainstream method. We then situate and summarize the articles.

The focus of complexity science is complex adaptive systems (CAS), systems ‘in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing and adaptation via learning or evolution’. It is important to distinguish CAS from systems that are merely complicated, such as a combustion engine, or complex but non-adaptive, such as a hurricane. A forest or coastal ecosystem, for example, is a complicated network of diverse physical and biological components, which, under no central rules of control, is highly adaptive over time…(More)”.

The Importance of Using Proper Research Citations to Encourage Trustworthy News Reporting


Article by Andy Tattersall: “…Understanding the often mysterious processes of how research is picked up and used across different sections of the media is therefore important. To do this we looked at a sample of research, with at least one author from the University of Sheffield, that had been cited in either national or local media. We obtained the data from Altmetric.com to explore whether the news story included supporting information that linked readers to the research and those behind it. These were links to any of the authors, their institution, the journal or the research funder. We also investigated how much of this research was available via open access.

The contrasts between national and local samples were notable. National news websites were more likely to include a link to the research paper underpinning the news story. National research coverage stories were also more organic. They were more likely to be original texts written by journalists who are credited as authors. This is reflected in more idiosyncratic citation practices. Guardian writers, such as Henry Nicholls and George Monbiot, regularly provided a proper academic citation to the research at the end of their articles. This should be standard practice, but it does require those writing press releases to include formatted citations with a link as a basic first step. 

Local news coverage followed a different pattern, which is likely due to their use of news agencies to provide stories. Much local news coverage relies on copying and pasting subscription content provided by the UK’s national news agency, PA News. Anyone who has visited their local news website in recent years will know that they are full of pop-ups and hyperlinks to adverts and commercial websites. As a result of this business model, local news stories contain no or very few links to the research and those behind the work. Whether any of this practice and the lack of information stems from academic institution and publisher press releases is debatable. 

Further, we found that local coverage of research is often syndicated across multiple news sites belonging to a few publishers. Consequently, if a syndicator republishes the same information across its news platforms, it replicates bad practice. A solution to this is to include a readily formatted citation with a link, preferably to an open access version, at the foot of the story. This allows local media to continue linking to third-party sites whilst providing an option to explore the actual research paper, especially if that paper is open access…(More)”.
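The link audit the authors describe — checking whether a story points readers to the paper, the journal, or the institution behind the research — might be sketched as below. The URL patterns, the example snippet, and the category names are assumptions for illustration, not the study’s actual coding scheme.

```python
import re

# Hedged sketch of a link audit: given a news story's HTML, record which
# kinds of supporting links it contains. All patterns are illustrative
# guesses, not the authors' methodology.
SUPPORT_PATTERNS = {
    "paper_doi": re.compile(r"https?://(?:dx\.)?doi\.org/10\.\S+"),
    "journal": re.compile(r"https?://(?:www\.)?(?:nature|sciencedirect|plos)\.\S+"),
    "institution": re.compile(r"https?://(?:www\.)?sheffield\.ac\.uk\S*"),
}

def supporting_links(html: str) -> dict:
    """Map each link category to whether the story contains such a link."""
    return {name: bool(pattern.search(html))
            for name, pattern in SUPPORT_PATTERNS.items()}

story = ('<p>The study, <a href="https://doi.org/10.1000/example">'
         'published today</a>, found...</p>')
print(supporting_links(story))
# prints: {'paper_doi': True, 'journal': False, 'institution': False}
```

Run over a corpus of stories, tallies of these booleans would yield exactly the national-versus-local contrasts the article reports.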