The fight against fake-paper factories that churn out sham science


Holly Else & Richard Van Noorden at Nature: “When Laura Fisher noticed striking similarities between research papers submitted to RSC Advances, she grew suspicious. None of the papers had authors or institutions in common, but their charts and titles looked alarmingly similar, says Fisher, the executive editor at the journal. “I was determined to try to get to the bottom of what was going on.”

A year later, in January 2021, Fisher retracted 68 papers from the journal, and editors at two other Royal Society of Chemistry (RSC) titles retracted one each over similar suspicions; 15 are still under investigation. Fisher had found what seemed to be the products of paper mills: companies that churn out fake scientific manuscripts to order. All the papers came from authors at Chinese hospitals. The journals’ publisher, the RSC in London, announced in a statement that it had been the victim of what it believed to be “the systemic production of falsified research”.

What was surprising about this was not the paper-mill activity itself: research-integrity sleuths have repeatedly warned that some scientists buy papers from third-party firms to help their careers. Rather, it was extraordinary that a publisher had publicly announced something that journals generally keep quiet about. “We believe that it is a paper mill, so we want to be open and transparent,” Fisher says.

The RSC wasn’t alone, its statement added: “We are one of a number of publishers to have been affected by such activity.” Since last January, journals have retracted at least 370 papers that have been publicly linked to paper mills, an analysis by Nature has found, and many more retractions are expected to follow.

Much of this literature cleaning has come about because, last year, outside sleuths publicly flagged papers that they suspect came from paper mills, owing to suspiciously similar features. Collectively, the lists of flagged papers total more than 1,000 studies, the analysis shows. Editors are so concerned by the issue that last September, the Committee on Publication Ethics (COPE), a publisher-advisory body in London, held a forum dedicated to discussing “systematic manipulation of the publishing process via paper mills”. The forum’s guest speaker was Elisabeth Bik, a research-integrity analyst in California known for her skill in spotting duplicated images in papers, and one of the sleuths who post their concerns about paper mills online….(More)”.

Opinion Fetishism: Can we escape the reductio ad tweetum?


Essay by Alexander Stern: “Something once expressed, however absurd, fortuitous or wrong it may be, because it has been once said, so tyrannizes the sayer as his property that he can never have done with it.” 

So observes the German social theorist Theodor Adorno in his 1951 book Minima Moralia. Although he is reflecting on the transformations of individuality and interpersonal relations in the industrial society of the late 1940s, Adorno sounds almost as though he is discussing Twitter, particularly the way tweets are taken as immutable expressions of a person’s essential being. Thoughts tweeted in the distant past are exhumed to torment people who have risen to prominence. People engage in ritual apologies for innocuous tweets that offend overly delicate sensibilities. Some insufficiently prudent souls even end up losing jobs for tweets that are hardly controversial. 

While all of this seems to be very much of our time, one of the many unhappy products of our highly mediated lives, the provenance of Adorno’s observation suggests that the distance between what we say and who we are—between ideas and identity—has been shrinking for a long time. The consequence of that shrinkage is not just that it can dehumanize. It also distorts democratic discourse, turning it into a war of all against all. Without the distance between self and thought, self and utterance, we are unable to entertain, probe, or debate ideas. We are unable to change our minds or to persuade others. We are not even in a position to form our views in thoughtful, disinterested ways. But there may yet be a way out. Precisely by codifying and accelerating the collapse of the distinction between ideas and identity, Twitter might ironically be alerting us to the absurdity and shallowness of intellectual life practiced on its terms. 

How exactly did we come to this pass? The simple answer, for Adorno, was that utterances—and those who utter them—have taken on a commodity character, in Karl Marx’s sense of the term. Commercial products, Marx thought, began to evince a strange quality under industrial production. They no longer appeared to be the result of a social process mixing labor and material but took on a fetishized glow that hid the specifics of their production and endowed their mere materiality with a quasi-mystical sheen—the kind that makes teenagers covet Air Jordans, for example. The amplification of this fetish character is, indeed, the explicit aim of contemporary branding….(More)”.

The mysterious user editing a global open-source map in China’s favor


Article by Vittoria Elliott and Nilesh Christopher: “Late last year, Nick Doiron spotted an article in The New York Times, detailing how China had built a village along the contested border with neighboring Bhutan. Doiron is a mapping aficionado and longtime contributor to OpenStreetMap (OSM), an open-source mapping platform that relies on an army of unpaid volunteers, just as Wikipedia does. Governments, universities, humanitarian groups, and companies like Amazon, Grab, Baidu, and Facebook all use data from OSM, making it an important tool that underpins ride-hailing apps and other technologies used by millions of people.

After reading the article, Doiron went to add new details about the Chinese village to OSM, which he expected would be missing. But when he zoomed in on the area, he made a peculiar discovery: Someone else had already documented the settlement before it was reported in the Times, and they had included granular details that Doiron couldn’t find anywhere else.

“They mapped the outlines of the buildings,” Doiron said, noting that the user had labeled one as a kindergarten, one as a police station, and another as a radio station. Even if the mysterious person had bought a satellite image from a private company, “I don’t know how they could have had that specific kind of information,” Doiron said.

That wasn’t the only thing that struck Doiron as strange. The user had also made the changes under the name NM$L, Chinese slang for the insult “Your mom is dead,” and linked to a Chinese rap music label that shares the same name. An accompanying bio hinted at their motives: “Safeguarding national sovereignty, unity and territorial integrity is the common obligation of all Chinese people, including compatriots in Hong Kong, Macao and Taiwan,” it read.

“Most people on OpenStreetMap don’t even have anything in their profile,” said Doiron. “It’s not like a social media site.”

As he looked deeper, Doiron discovered that NM$L had made several other edits, many of them along China’s borders and in contested territories. The account had added changes to the Spratly Islands, an archipelago that an international tribunal ruled in 2016 was not part of China’s possible territorial claims, though China has continued to develop in the area. The account also drew lines along the Line of Actual Control (LAC) that separates Indian and Chinese territory in the disputed Himalayan border region, over which the two countries fought a war in 1962.
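OSM’s changeset history is public, so anyone can reproduce this kind of audit. Here is a minimal sketch of one way to do it, using only the Python standard library and OSM’s documented `/api/0.6/changesets` endpoint; it is an illustration of the general technique, not a reconstruction of how Doiron actually worked.

```python
# Minimal sketch: list a contributor's recent OpenStreetMap changesets via
# the public API. Illustrative only -- not a reconstruction of Doiron's
# actual workflow.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

API = "https://api.openstreetmap.org/api/0.6/changesets"

def recent_changesets(display_name):
    """Yield (changeset id, bounding box, comment) for a user's recent edits."""
    url = API + "?" + urllib.parse.urlencode({"display_name": display_name})
    with urllib.request.urlopen(url) as resp:
        root = ET.parse(resp).getroot()
    for cs in root.iter("changeset"):
        # Each <changeset> carries its bounding box as attributes and an
        # optional <tag k="comment" v="..."/> child.
        bbox = tuple(cs.get(k) for k in ("min_lat", "min_lon", "max_lat", "max_lon"))
        comment = next((t.get("v") for t in cs.iter("tag") if t.get("k") == "comment"), "")
        yield cs.get("id"), bbox, comment

# Example: see where the account described in the article has been editing.
for cs_id, bbox, comment in recent_changesets("NM$L"):
    print(cs_id, bbox, comment)
```

Plotting the bounding boxes on a map would show whether a user’s edits cluster along borders and disputed territories, which is essentially the pattern Doiron noticed.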

What, Doiron wondered, is going on here? 

Anyone can contribute to OSM, which makes the site democratic and open, but also leaves it vulnerable to the politics and perspectives of its individual contributors. This wasn’t the first time Doiron had heard of a user making edits in a certain country’s favor. “I know there are pro-India accounts that have added things like military checkpoints from the India perspective,” he said….(More)”.

Collaboration technology has been invaluable during the pandemic


TechRepublic: “The pandemic forced the enterprise to quickly pivot from familiar business practices and develop ways to successfully function while keeping employees safe. A new report from Zoom, The Impact of Video Communications During COVID-19, was released Thursday.

“Video communications were suddenly our lifeline to society, enabling us to continue work and school in a digital environment,” said Brendan Ittelson, chief technology officer of Zoom, on the company’s blog. “Any baby steps toward digital transformation suddenly had to become leaps and bounds, with people reimagining their entire day-to-day practically overnight.”

Zoom commissioned the Boston Consulting Group (BCG) to conduct a survey and economic analysis evaluating the economic impact of remote work and video communications solutions during the pandemic. The analysis focused on which industries pivoted business processes using video conferencing, achieving business continuity and even growth during a time of significant economic turmoil.

Key findings

  • In the U.S., the ability to work remotely saved 2.28 million jobs; up to three times as many employees worked remotely as before the pandemic, with a nearly threefold increase in the use of video conferencing solutions.
  • Of the businesses surveyed, the total time spent on video conferencing solutions increased to as much as five times pre-pandemic levels.
  • BCG’s COVID-19 employee sentiment survey from 2020 showed that 70% of managers are more open to flexible remote working models than they were before the pandemic.
  • Hybrid working models will be the norm soon. The businesses surveyed expect more than a third of employees to work remotely beyond the pandemic.
  • The U.K. saved 550,000 jobs because of remote capabilities; Germany saved 372,000 jobs and France saved 250,000….(More)”.

Using FOIA logs to develop news stories


Yilun Cheng at MuckRock: “In the fiscal year 2020, federal agencies received a total of 790,772 Freedom of Information Act (FOIA) requests. There are also tens of thousands of state and local agencies taking in and processing public record requests on a daily basis. Since most agencies keep a log of requests received, FOIA-minded reporters can find interesting story ideas by asking for and digging through the history of what other people are looking to obtain.

Some FOIA logs are posted on the websites of agencies that proactively release these records. Those that are not can be obtained through a FOIA request. There are a number of online resources that collect and store these documents, including MuckRock, the Black Vault, Government Attic, and FOIA Land.

Sorting through a FOIA log can be challenging, since the format differs from agency to agency. A well-maintained log might include comprehensive information on the names of the requesters, the records being asked for, the dates of the requests’ receipt, and the agency’s responses, as shown, for example, in a log released by the U.S. Department of Health and Human Services: https://www.documentcloud.org/documents/20508483/annotations/2024702

But other departments (the Cook County Department of Public Health, for instance) might only send over a three-column spreadsheet with no descriptions of the nature of the requests: https://www.documentcloud.org/documents/20491259/annotations/2024703
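When a log does arrive as a structured spreadsheet, even a short script can surface recurring topics worth pursuing. A minimal sketch in Python, assuming a CSV export with a hypothetical `description` column (column names vary from agency to agency, so adjust to match the log in hand):

```python
# Minimal sketch: tally recurring words in a FOIA log's request descriptions
# to spot story leads. The CSV layout here is hypothetical -- every agency
# formats its log differently.
import csv
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "for", "and", "all", "any",
             "records", "request", "copies", "regarding"}

def common_topics(path, column="description", top=20):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for raw in (row.get(column) or "").lower().split():
                word = raw.strip(".,;:()\"'")
                if len(word) > 3 and word not in STOPWORDS:
                    counts[word] += 1
    return counts.most_common(top)

# Example usage against a hypothetical export named foia_log.csv:
for word, n in common_topics("foia_log.csv"):
    print(f"{n:5d}  {word}")
```

A cluster of requests about the same facility, contract, or incident is often the first visible trace of a story someone else is already chasing.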

As a result, learning how to negotiate with agencies and interpreting the content in their FOIA logs are crucial for journalists trying to understand the public record landscape. While some reporters only use FOIA logs to keep tabs on their competitors’ reporting interests, the potential of these documents goes far beyond this. Below are some tips for getting story inspiration from FOIA logs….(More)”.

What Data Can’t Do


Hannah Fry in The New Yorker: “Tony Blair was usually relaxed and charismatic in front of a crowd. But an encounter with a woman in the audience of a London television studio in April, 2005, left him visibly flustered. Blair, eight years into his tenure as Britain’s Prime Minister, had been on a mission to improve the National Health Service. The N.H.S. is a much loved, much mocked, and much neglected British institution, with all kinds of quirks and inefficiencies. At the time, it was notoriously difficult to get a doctor’s appointment within a reasonable period; ailing people were often told they’d have to wait weeks for the next available opening. Blair’s government, bustling with bright technocrats, decided to address this issue by setting a target: doctors would be given a financial incentive to see patients within forty-eight hours.

It seemed like a sensible plan. But audience members knew of a problem that Blair and his government did not. Live on national television, Diana Church calmly explained to the Prime Minister that her son’s doctor had asked to see him in a week’s time, and yet the clinic had refused to take any appointments more than forty-eight hours in advance. Otherwise, physicians would lose out on bonuses. If Church wanted her son to see the doctor in a week, she would have to wait until the day before, then call at 8 a.m. and stick it out on hold. Before the incentives had been established, doctors couldn’t give appointments soon enough; afterward, they wouldn’t give appointments late enough.

“Is this news to you?” the presenter asked.

“That is news to me,” Blair replied.

“Anybody else had this experience?” the presenter asked, turning to the audience.

Chaos descended. People started shouting, Blair started stammering, and a nation watched its leader come undone over a classic case of counting gone wrong.

Blair and his advisers are far from the first people to fall afoul of their own well-intentioned targets. Whenever you try to force the real world to do something that can be counted, unintended consequences abound. That’s the subject of two new books about data and statistics: “Counting: How We Use Numbers to Decide What Matters” (Liveright), by Deborah Stone, which warns of the risks of relying too heavily on numbers, and “The Data Detective” (Riverhead), by Tim Harford, which shows ways of avoiding the pitfalls of a world driven by data.

Both books come at a time when the phenomenal power of data has never been more evident. The covid-19 pandemic demonstrated just how vulnerable the world can be when you don’t have good statistics, and the Presidential election filled our newspapers with polls and projections, all meant to slake our thirst for insight. In a year of uncertainty, numbers have even come to serve as a source of comfort. Seduced by their seeming precision and objectivity, we can feel betrayed when the numbers fail to capture the unruliness of reality.

The particular mistake that Tony Blair and his policy mavens made is common enough to warrant its own adage: once a useful number becomes a measure of success, it ceases to be a useful number. This is known as Goodhart’s law, and it reminds us that the human world can move once you start to measure it….(More)”.

From Tech Critique to Ways of Living


Alan Jacobs at the New Atlantis: “Neil Postman was right. So what?… In the 1950s and 1960s, a series of thinkers, beginning with Jacques Ellul and Marshall McLuhan, began to describe the anatomy of our technological society. Then, starting in the 1970s, a generation emerged who articulated a detailed critique of that society. The critique produced by these figures I refer to in the singular because it shares core features, if not a common vocabulary. What Ivan Illich, Ursula Franklin, Albert Borgmann, and a few others have said about technology is powerful, incisive, and remarkably coherent. I am going to call the argument they share the Standard Critique of Technology, or SCT. The one problem with the SCT is that it has had no success in reversing, or even slowing, the momentum of our society’s move toward what one of their number, Neil Postman, called technopoly.

The basic argument of the SCT goes like this. We live in a technopoly, a society in which powerful technologies come to dominate the people they are supposed to serve, and reshape us in their image. These technologies, therefore, might be called prescriptive (to use Franklin’s term) or manipulatory (to use Illich’s). For example, social networks promise to forge connections — but they also encourage mob rule. Facial-recognition software helps to identify suspects — and to keep tabs on whole populations. Collectively, these technologies constitute the device paradigm (Borgmann), which in turn produces a culture of compliance (Franklin).

The proper response to this situation is not to shun technology itself, for human beings are intrinsically and necessarily users of tools. Rather, it is to find and use technologies that, instead of manipulating us, serve sound human ends and the focal practices (Borgmann) that embody those ends. A table becomes a center for family life; a musical instrument skillfully played enlivens those around it. Those healthier technologies might be referred to as holistic (Franklin) or convivial (Illich), because they fit within the human lifeworld and enhance our relations with one another. Our task, then, is to discern these tendencies or affordances of our technologies and, on both social and personal levels, choose the holistic, convivial ones.

The Standard Critique of Technology as thus described is cogent and correct. I have referred to it many times and applied it to many different situations. For instance, I have used the logic of the SCT to make a case for rejecting the “walled gardens” of the massive social media companies, and for replacing them with a cultivation of the “digital commons” of the open web.

But the number of people who are even open to following this logic is vanishingly small. For all its cogency, the SCT is utterly powerless to slow our technosocial momentum, much less to alter its direction. Since Postman and the rest made that critique, the social order has rushed ever faster toward a complete and uncritical embrace of the prescriptive, manipulatory technologies deceitfully presented to us as Liberation and Empowerment. So what next?…(More)”.

A Victory for Scientific Pragmatism


Essay by Arturo Casadevall, Michael J. Joyner, and Nigel Paneth: “…The convalescent plasma controversy highlights the need to better educate physicians on the knowledge problem in medicine: How do we know what we know, and how do we acquire new knowledge? The usual practice guidelines doctors rely on for the treatment of disease were not available for the treatment of Covid-19 early in the pandemic, since these are usually issued by professional societies only after definitive information is available from randomized controlled trials (RCTs), a luxury we did not have. The convalescent plasma experience supports Devorah Goldman’s plea to consider all available information when making therapeutic decisions.

Fortunately, the availability of rapid communication through pre-print studies, social media, and online conferences has allowed physicians to learn quickly. The experience suggests the value of providing more instruction in medical schools, postgraduate education, and continuing medical education on how best to evaluate evidence — especially preliminary and seemingly contradictory evidence. Just as physicians learn to use clinical judgment in treating individual patients, they must learn how to weigh evidence in treating populations of patients. We also need greater nimbleness and more flexibility from regulators and practice-guideline groups in emergency situations such as pandemics. They should issue interim recommendations that synthesize the best available evidence, as the American Association of Blood Banks has done for plasma, recognizing that these recommendations may change as new evidence accumulates. Similarly, we all need to make greater efforts to educate the public to understand that all knowledge in medicine and science is provisional, subject to change as new and better studies emerge. Updating and revising recommendations as knowledge advances is not a weakness but a foundational strength of good medicine….(More)”.

Hospitals Hide Pricing Data From Search Results


Tom McGinty, Anna Wilde Mathews, and Melanie Evans at the Wall Street Journal: “Hospitals that have published their previously confidential prices to comply with a new federal rule have also blocked that information from web searches with special coding embedded on their websites, according to a Wall Street Journal examination.

The information must be disclosed under a federal rule aimed at making the $1 trillion sector more consumer friendly. But hundreds of hospitals embedded code in their websites that prevented Alphabet Inc.’s Google and other search engines from displaying pages with the price lists, according to the Journal examination of more than 3,100 sites.

The code keeps pages from appearing in searches, such as those related to a hospital’s name and prices, computer-science experts said. The prices are often accessible other ways, such as through links that can require clicking through multiple layers of pages.
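The Journal does not reproduce the hospitals’ markup, but the standard mechanism for keeping a page out of search results is a robots meta tag, and checking a page for one takes only a few lines. A minimal sketch, assuming the common `noindex` form rather than any hospital’s specific code:

```python
# Minimal sketch: test whether a page carries the standard
# <meta name="robots" content="noindex"> tag, which tells search engines
# not to index it. The hospitals' exact markup isn't reproduced in the
# article; this checks only the common mechanism.
from html.parser import HTMLParser
import urllib.request

class RobotsMetaFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        content = (a.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True

def page_blocks_indexing(url):
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    finder = RobotsMetaFinder()
    finder.feed(html)
    return finder.noindex

# Hypothetical URL; the Journal's examination covered 3,100-plus sites.
print(page_blocks_indexing("https://example.com/price-transparency"))
```

Run across a list of price-transparency pages, a check like this is one way to measure how widespread the practice is, which is essentially what the Journal’s examination did at scale.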

“It’s technically there, but good luck finding it,” said Chirag Shah, an associate professor at the University of Washington who studies human interactions with computers. “It’s one thing not to optimize your site for searchability, it’s another thing to tag it so it can’t be searched. It’s a clear indication of intentionality.”…(More)”.

Negligence, Not Politics, Drives Most Misinformation Sharing


John Timmer at Wired: “…a small international team of researchers… decided to take a look at how a group of US residents decided on which news to share. Their results suggest that some of the standard factors that people point to when explaining the tsunami of misinformation—inability to evaluate information and partisan biases—aren’t having as much influence as most of us think. Instead, a lot of the blame gets directed at people just not paying careful attention.

The researchers ran a number of fairly similar experiments to get at the details of misinformation sharing. This involved panels of US-based participants recruited either through Mechanical Turk or via a survey population that provided a more representative sample of the US. Each panel had several hundred to over 1,000 individuals, and the results were consistent across different experiments, so there was a degree of reproducibility to the data.

To do the experiments, the researchers gathered a set of headlines and lead sentences from news stories that had been shared on social media. The set was evenly mixed between headlines that were clearly true and clearly false, and each of these categories was split again between those headlines that favored Democrats and those that favored Republicans.

One thing that was clear is that people are generally capable of judging the accuracy of the headlines. There was a 56 percentage point gap between how often an accurate headline was rated as true and how often a false headline was. People aren’t perfect—they still got things wrong fairly often—but they’re clearly quite a bit better at this than they’re given credit for.

The second thing is that ideology doesn’t really seem to be a major factor in driving judgments of whether a headline was accurate. People were more likely to rate headlines that agreed with their politics as true, but the difference here was only 10 percentage points. That’s significant (both societally and statistically), but it’s certainly not a large enough gap to explain the flood of misinformation.

But when the same people were asked whether they’d share these same stories, politics played a big role, and the truth receded. The difference in intention to share between true and false headlines was only 6 percentage points, while whether a headline agreed with a person’s politics produced a 20 percentage point gap. Putting it in concrete terms, the authors look at the false headline “Over 500 ‘Migrant Caravaners’ Arrested With Suicide Vests.” Only 16 percent of conservatives in the survey population rated it as true. But over half of them were amenable to sharing it on social media….(More)”.
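The “percentage point gap” figures in the study are simple differences in rates. A worked sketch with hypothetical tallies, chosen only to mirror the gaps the article reports (the study’s actual data are richer):

```python
# Worked sketch of the "percentage point gap" arithmetic. The tallies are
# hypothetical, chosen only to mirror the gaps reported in the article.
def pct(yes, total):
    return 100.0 * yes / total

# Accuracy judgments: share of respondents rating each headline type "true".
accuracy_gap = pct(730, 1000) - pct(170, 1000)   # 73% - 17% = 56 points
print(f"Accuracy gap: {accuracy_gap:.0f} points")

# Sharing intentions: the same comparison for willingness to share.
sharing_gap = pct(290, 1000) - pct(230, 1000)    # 29% - 23% = 6 points
print(f"Sharing gap: {sharing_gap:.0f} points")
```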