Why Do Universities Ignore Good Ideas?


Article by Jeffrey Funk: “Here is a recent assessment of 2023 Nobel Prize winner Katalin Karikó:

“Eight current and former colleagues of Karikó told The Daily Pennsylvanian that — over the course of three decades — the university repeatedly shunned Karikó and her research, despite its groundbreaking potential.”

Another article claims that this occurred because she could not get the financial support to continue her research.

Why couldn’t she get financial support? “You’re more likely to get grants if you’re a tenured faculty member, but you’re more likely to get promoted to tenure if you get grants,” said Eric Feigl-Ding, an epidemiologist at the New England Complex Systems Institute and a former faculty member and researcher at Harvard Medical School. “There is a vicious cycle,” he said.

Interesting. So, the idea doesn’t matter. What matters to funding agencies is that you have previously obtained funding or are a tenured professor. Really? Are funding agencies this narrow-minded?

Mr. Feigl-Ding also said, “Universities also tend to look at how much a researcher publishes, or how widely covered by the media their work is, as opposed to how innovative the research is.” But why couldn’t Karikó get published?

Science magazine tells the story of her main 2005 paper with Drew Weissman. After Nature rejected it within 24 hours, “It was similarly rejected by Science and by Cell, and the word incremental kept cropping up in the editorial staff comments.”

Incremental? More than two million papers are published each year, and this research, for which Karikó and Weissman won a Nobel Prize, was deemed incremental? If it had been rejected over its methods, or because its findings were impossible to believe, I think most people could understand the rejection. But incremental?

Obviously, most of the two million papers published each year really are incremental. Yet one of the few papers that we can all agree was not incremental was rejected precisely because it was deemed incremental.

Furthermore, this is happening in a system of science in which even Nature admits that “disruptive science has declined,” few science-based technologies are being successfully commercialized, and Nature concedes that it doesn’t understand why…(More)”.

A complexity science approach to law and governance


Introduction to a Special Issue by Pierpaolo Vivo, Daniel M. Katz and J. B. Ruhl: “The premise of this Special Issue is that legal systems are complex adaptive systems, and thus complexity science can be usefully applied to improve understanding of how legal systems operate, perform and change over time. The articles that follow take this proposition as a given and act on it using a variety of methods applied to a broad array of legal system attributes and contexts. Yet not too long ago some prominent legal scholars expressed scepticism that this field of study would produce more than broad generalizations, if even that. To orient readers unfamiliar with this field and its history, here we offer a brief background on how using complexity science to study legal systems has advanced from claims of ‘pseudoscience’ status to a widely adopted mainstream method. We then situate and summarize the articles.

The focus of complexity science is complex adaptive systems (CAS), systems ‘in which large networks of components with no central control and simple rules of operation give rise to complex collective behavior, sophisticated information processing and adaptation via learning or evolution’. It is important to distinguish CAS from systems that are merely complicated, such as a combustion engine, or complex but non-adaptive, such as a hurricane. A forest or coastal ecosystem, for example, is a complex network of diverse physical and biological components, which, under no central rules of control, is highly adaptive over time…(More)”.
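
To see what “simple rules of operation, no central control” means in practice, here is a toy illustration (ours, not drawn from the Special Issue): a Python sketch in which components on a ring each follow a single local majority rule, with a little noise standing in for adaptation, and ordered clusters emerge without any central coordinator.

```python
import random

# Minimal illustration: many components, no central controller, one simple
# local rule each. Global structure (clusters of agreement) emerges from
# purely local interactions. All parameters are arbitrary.

N = 60        # number of components on the ring
STEPS = 30    # update rounds
random.seed(1)

state = [random.choice([0, 1]) for _ in range(N)]

def step(s):
    """Each component copies the majority of itself and its two neighbours,
    with a small chance of a random flip (adaptation noise)."""
    nxt = []
    for i in range(len(s)):
        local = s[i - 1] + s[i] + s[(i + 1) % len(s)]
        value = 1 if local >= 2 else 0
        if random.random() < 0.02:
            value = 1 - value
        nxt.append(value)
    return nxt

for _ in range(STEPS):
    print("".join("#" if x else "." for x in state))
    state = step(state)
```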

The Importance of Using Proper Research Citations to Encourage Trustworthy News Reporting


Article by Andy Tattersall: “…Understanding the often mysterious processes of how research is picked up and used across different sections of the media is therefore important. To do this we looked at a sample of research with at least one author from the University of Sheffield that had been cited in either national or local media. We obtained the data from Altmetric.com to explore whether the news stories included supporting information that linked readers to the research and those behind it. These were links to any of the authors, their institution, the journal or the research funder. We also investigated how much of this research was available via open access.
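
As a rough illustration of the kind of link audit described above (not the study’s actual pipeline, and not Altmetric’s API; the domains and patterns below are assumptions chosen for the example), one could scan a story’s HTML for links that point readers back to the paper, the institution, or the funder:

```python
import re

# Hypothetical check: which kinds of supporting links does a news story contain?
DOI_PATTERN = re.compile(r"https?://(?:dx\.)?doi\.org/10\.\S+", re.IGNORECASE)

INDICATORS = {
    "journal_or_paper": ["doi.org", "nature.com", "sciencedirect.com"],  # illustrative
    "institution": ["sheffield.ac.uk"],
    "funder": ["ukri.org", "wellcome.org"],
}

def audit_story(html: str) -> dict:
    """Return which categories of supporting link appear in the story."""
    links = re.findall(r'href="([^"]+)"', html)
    found = {key: any(dom in link for link in links for dom in domains)
             for key, domains in INDICATORS.items()}
    found["has_doi_link"] = bool(DOI_PATTERN.search(html))
    return found

# Toy usage:
story = '<p>New study <a href="https://doi.org/10.1234/example">published today</a>.</p>'
print(audit_story(story))
```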

The contrasts between national and local samples were notable. National news websites were more likely to include a link to the research paper underpinning the news story. National research coverage stories were also more organic. They were more likely to be original texts written by journalists who are credited as authors. This is reflected in more idiosyncratic citation practices. Guardian writers, such as Henry Nicholls and George Monbiot, regularly provided a proper academic citation to the research at the end of their articles. This should be standard practice, but it does require those writing press releases to include formatted citations with a link as a basic first step. 

Local news coverage followed a different pattern, likely due to outlets’ reliance on news agencies to provide stories. Much local news coverage relies on copying and pasting subscription content provided by the UK’s national news agency, PA News. Anyone who has visited a local news website in recent years will know that such sites are full of pop-ups and hyperlinks to adverts and commercial websites. As a result of this business model, local news stories contain few or no links to the research and those behind the work. Whether any of this practice, and the lack of supporting information, stems from the press releases issued by academic institutions and publishers is debatable.

Further, we found that local coverage of research is often syndicated across multiple news sites belonging to a few publishers. Consequently, when a syndicator republishes the same information across its news platforms, it replicates bad practice. A solution is to include a readily formatted citation with a link, preferably to an open access version, at the foot of the story. This allows local media to continue linking to third-party sites whilst providing an option to explore the actual research paper, especially if that paper is open access…(More)”.
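
That recommendation is easy to operationalise in a press-release workflow. Below is a minimal sketch, with entirely hypothetical paper metadata, of a helper that builds the suggested citation-plus-link footer, preferring an open access copy where one exists:

```python
def citation_footer(authors, year, title, journal, doi, oa_url=None):
    """Build a formatted citation line, linking to an open access copy
    where one exists, otherwise to the DOI."""
    link = oa_url or f"https://doi.org/{doi}"
    return f"{authors} ({year}). {title}. {journal}. Available at: {link}"

# Hypothetical record, purely for illustration:
print(citation_footer(
    authors="Smith, J. et al.",
    year=2024,
    title="An example research paper",
    journal="Journal of Example Studies",
    doi="10.1234/example",
    oa_url="https://example-repository.org/record/123",  # hypothetical OA copy
))
```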

Research Project Management and Leadership


Book by P. Alison Paprica: “The project management approaches used by millions of people internationally are often too detailed or constraining to be applied to research. In this handbook, project management expert P. Alison Paprica presents guidance specifically developed to help with the planning, management, and leadership of research.

Research Project Management and Leadership provides simplified versions of globally utilized project management tools, such as the work breakdown structure to visualize scope, and offers guidance on processes, including a five-step process to identify and respond to risks. The complementary leadership guidance in the handbook is presented in the form of interview write-ups with 19 Canadian and international research leaders, each of whom describes a situation where leadership skills were important, how they responded, and what they learned. The accessible language and practical guidance in the handbook make it a valuable resource for everyone from principal investigators leading multimillion-dollar projects to graduate students planning their thesis research. The book aims to help readers understand which management and leadership tools, processes, and practices are helpful in different circumstances, and how to implement them in research settings…(More)”.

How tracking animal movement may save the planet


Article by Matthew Ponsford: “Researchers have been dreaming of an Internet of Animals. They’re getting closer to monitoring 100,000 creatures—and revealing hidden facets of our shared world….There was something strange about the way the sharks were moving between the islands of the Bahamas.

Tiger sharks tend to hug the shoreline, explains marine biologist Austin Gallagher, but when he began tagging the 1,000-pound animals with satellite transmitters in 2016, he discovered that these predators turned away from it, toward two ancient underwater hills made of sand and coral fragments that stretch out 300 miles toward Cuba. They were spending a lot of time “crisscrossing, making highly tortuous, convoluted movements” to be near them, Gallagher says. 

It wasn’t immediately clear what attracted sharks to the area: while satellite images clearly showed the subsea terrain, they didn’t pick up anything out of the ordinary. It was only when Gallagher and his colleagues attached 360-degree cameras to the animals that they were able to confirm what they were so drawn to: vast, previously unseen seagrass meadows—a biodiverse habitat that offered a smorgasbord of prey.   

The discovery did more than solve a minor mystery of animal behavior. Using the data they gathered from the sharks, the researchers were able to map an expanse of seagrass stretching across 93,000 square kilometers of Caribbean seabed—extending the total known global seagrass coverage by more than 40%, according to a study Gallagher’s team published in 2022. This revelation could have huge implications for efforts to protect threatened marine ecosystems—seagrass meadows are a nursery for one-fifth of key fish stocks and habitats for endangered marine species—and also for all of us above the waves, as seagrasses can capture carbon up to 35 times faster than tropical rainforests. 

Animals have long been able to offer unique insights about the natural world around us, acting as organic sensors picking up phenomena that remain invisible to humans. More than 100 years ago, leeches signaled storms ahead by slithering out of the water; canaries warned of looming catastrophe in coal mines until the 1980s; and mollusks that close when exposed to toxic substances are still used to trigger alarms in municipal water systems in Minneapolis and Poland…(More)”.

Language Machinery


Essay by Richard Hughes Gibson: “… current debates about writing machines are not as fresh as they seem. As is quietly acknowledged in the footnotes of scientific papers, much of the intellectual infrastructure of today’s advances was laid decades ago. In the 1940s, the mathematician Claude Shannon demonstrated that language use could be both described by statistics and imitated with statistics, whether those statistics were in human heads or a machine’s memory. Shannon, in other words, was the first statistical language modeler, which makes ChatGPT and its ilk his distant brainchildren. Shannon never tried to build such a machine, but some astute early readers of his work recognized that computers were primed to translate his paper-and-ink experiments into a powerful new medium. In writings now discussed largely in niche scholarly and computing circles, these readers imagined—and even made preliminary sketches of—machines that would translate Shannon’s proposals into reality. These readers likewise raised questions about the meaning of such machines’ outputs and wondered what the machines revealed about our capacity to write.
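
Shannon’s “imitation with statistics” is easy to reproduce today. The sketch below is our illustration in the spirit of his second-order word approximations, not Shannon’s own procedure or anything resembling a modern language model: it estimates word-pair statistics from a tiny sample and then generates text by sampling the next word given the current one.

```python
import random
from collections import defaultdict

# A Shannon-style bigram approximation: describe a text with word-pair
# statistics, then imitate it by sampling from those statistics.
sample = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog and the dog saw the cat"
)

words = sample.split()
bigrams = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    bigrams[current].append(nxt)          # empirical next-word counts

random.seed(0)
word = "the"
output = [word]
for _ in range(15):
    word = random.choice(bigrams[word])   # sample in proportion to observed frequency
    output.append(word)

print(" ".join(output))
```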

The current barrage of commentary has largely neglected this backstory, and our discussions suffer for forgetting that issues that appear novel to us belong to the mid-twentieth century. Shannon and his first readers were the original residents of the headspace in which so many of us now find ourselves. Their ambitions and insights have left traces on our discourse, just as their silences and uncertainties haunt our exchanges. If writing machines constitute a “philosophical event” or a “prompt for philosophizing,” then I submit that we are already living in the event’s aftermath, which is to say, in Shannon’s aftermath. Amid the rampant speculation about a future dominated by writing machines, I propose that we turn in the other direction to listen to field reports from some of the first people to consider what it meant to read and write in Shannon’s world…(More)”.

Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data


Report by the National Academies of Sciences, Engineering, and Medicine: “Protecting privacy and ensuring confidentiality in data is a critical component of modernizing our national data infrastructure. The use of blended data – combining previously collected data sources – presents new considerations for responsible data stewardship. Toward a 21st Century National Data Infrastructure: Managing Privacy and Confidentiality Risks with Blended Data provides a framework for managing disclosure risks that accounts for the unique attributes of blended data and poses a series of questions to guide considered decision-making.

Technical approaches to managing disclosure risk have advanced. Recent federal legislation, regulation, and guidance have broadly described the roles and responsibilities for stewardship of blended data. The report, drawing on the panel’s review of both technical and policy approaches, addresses these emerging opportunities and the new challenges and responsibilities they present. It underscores that trade-offs in disclosure risks, disclosure harms, and data usefulness are unavoidable and are central considerations when planning data-release strategies, particularly for blended data…(More)”.
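
One standard way to make the disclosure-risk side of that trade-off concrete is to measure how identifiable records become once sources are combined. The sketch below is our illustration, not the report’s framework: it computes a simple k-anonymity value over hypothetical quasi-identifiers in a blended file (all field names and records are invented).

```python
from collections import Counter

# Hypothetical blended records: survey responses linked with administrative
# data. The quasi-identifiers (ZIP, birth year, sex) could allow
# re-identification if a particular combination is rare.
blended = [
    {"zip": "10001", "birth_year": 1980, "sex": "F", "income": 52000},
    {"zip": "10001", "birth_year": 1980, "sex": "F", "income": 61000},
    {"zip": "10001", "birth_year": 1975, "sex": "M", "income": 48000},
    {"zip": "60601", "birth_year": 1990, "sex": "F", "income": 39000},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def k_anonymity(records, quasi_ids):
    """k = size of the smallest group sharing the same quasi-identifier values;
    a group of size 1 means a record is unique and at higher disclosure risk."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

print("k =", k_anonymity(blended, QUASI_IDENTIFIERS))  # k = 1 here: unique records exist
```

A k of 1 signals that at least one record is unique on its quasi-identifiers, which is exactly the situation where suppression, coarsening, or other disclosure-limitation measures come into play before release.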

Enabling Data-Driven Innovation : Learning from Korea’s Data Policies and Practices for Harnessing AI 


Report by the World Bank: “Over the past few decades, the Republic of Korea has consciously undertaken initiatives to transform its economy into a competitive, data-driven system. The primary objectives of this transition were to stimulate economic growth and job creation, enhance the nation’s capacity to withstand adversities such as the aftermath of COVID-19, and position it favorably to capitalize on emerging technologies, particularly artificial intelligence (AI). The Korean government has endeavored to accomplish these objectives through establishing a dependable digital data infrastructure and a comprehensive set of national data policies. This policy note aims to present a comprehensive synopsis of Korea’s extensive efforts to establish a robust digital data infrastructure and utilize data as a key driver for innovation and economic growth. The note additionally addresses the fundamental elements required to realize these benefits of data, including data policies, data governance, and data infrastructure. Furthermore, the note highlights some key results of Korea’s data policies, including the expansion of public data opening, the development of big data platforms, and the growth of the AI Hub. It also mentions the characteristics and success factors of Korea’s data policy, such as government support and the reorganization of institutional infrastructures. However, it acknowledges that there are still challenges to overcome, such as in data collection and utilization as well as transitioning from a government-led to a market-friendly data policy. The note concludes by providing developing countries and emerging economies with specific insights derived from Korea’s forward-thinking policy making that can assist them in harnessing the potential and benefits of data…(More)”.

Why Machines Learn: The Elegant Maths Behind Modern AI


Book by Anil Ananthaswamy: “Machine-learning systems are making life-altering decisions for us: approving mortgage loans, determining whether a tumour is cancerous, or deciding whether someone gets bail. They now influence discoveries in chemistry, biology and physics – the study of genomes, extra-solar planets, even the intricacies of quantum systems.

We are living through a revolution in artificial intelligence that is not slowing down. This major shift is based on simple mathematics, some of which goes back centuries: linear algebra and calculus, the stuff of eighteenth-century mathematics. Indeed, by the mid-1850s much of the groundwork had already been laid. It took the development of computer science, and the kindling provided by 1990s computer chips designed for video games, to ignite the explosion of AI that we see all around us today. In this enlightening book, Anil Ananthaswamy explains the fundamental maths behind AI, which suggests that the basics of natural and artificial intelligence might follow the same mathematical rules…(More)”.
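
To make the “linear algebra and calculus” claim concrete, here is a minimal sketch (ours, not the book’s) of the core loop behind much of machine learning: fitting a linear model by gradient descent, using nothing beyond a prediction, a squared error, and its partial derivatives.

```python
# Gradient descent for a one-feature linear model y ≈ w*x + b.
# The update rule is calculus (partial derivatives of the mean squared error)
# applied to a linear prediction.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]      # roughly y = 2x + 1 with a little noise

w, b = 0.0, 0.0
lr = 0.02                      # learning rate

for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # close to the underlying slope 2 and intercept 1
```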

Do disappearing data repositories pose a threat to open science and the scholarly record?


Article by Dorothea Strecker, Heinz Pampel, Rouven Schabinger and Nina Leonie Weisweiler: “Research data repositories, such as Zenodo or the UK Data Archive, are specialised information infrastructures that focus on the curation and dissemination of research data. One of repositories’ main tasks is maintaining their collections long-term; see, for example, the TRUST Principles or the requirements of the certification organisation CoreTrustSeal. Long-term preservation is also a prerequisite for several data practices that are getting increasing attention, such as data reuse and data citation.

For data to remain usable, the infrastructures that host them also have to be kept operational. However, the long-term operation of research data repositories is challenging, and sometimes, for varying reasons and despite best efforts, they are shut down….

In a recent study we therefore set out to take an infrastructure perspective on the long-term preservation of research data by investigating repositories across disciplines and types that were shut down. We also tried to estimate the impact of repository shutdown on data availability…

We found that repository shutdown was not rare: 6.2% of all repositories listed in re3data were shut down. Since the launch of the registry in 2012, at least one repository has been shut down each year. The median age of a repository at shutdown was 12 years…(More)”.
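
For readers who want to reproduce this kind of estimate, the arithmetic is straightforward once registry metadata are in hand. The sketch below uses hypothetical records and field names, not re3data’s actual schema or the authors’ code:

```python
from statistics import median

# Hypothetical repository metadata, loosely modelled on what a registry
# export might contain: a start year and, if shut down, an end year.
repositories = [
    {"name": "Repo A", "started": 2005, "ended": 2017},
    {"name": "Repo B", "started": 2010, "ended": None},   # still operating
    {"name": "Repo C", "started": 1998, "ended": 2012},
    {"name": "Repo D", "started": 2013, "ended": None},
]

shut_down = [r for r in repositories if r["ended"] is not None]

share_shut_down = len(shut_down) / len(repositories)
ages_at_shutdown = [r["ended"] - r["started"] for r in shut_down]

print(f"Share shut down: {share_shut_down:.1%}")
print(f"Median age at shutdown: {median(ages_at_shutdown)} years")
```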