Paper by Nicholas Fabiano et al.: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews have become less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and to assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion and exclusion criteria, extracting essential data from studies, and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that, to ensure replicability, authors must report in their methods all AI tools used at each stage….(More)”.
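The screening stage the authors describe is the most concrete of these. A minimal sketch of what LLM-assisted title/abstract screening could look like, assuming a generic chat-style model behind a placeholder `ask_llm` function (the function, the example criteria, and the model tag below are illustrative assumptions, not tools named in the paper):

```python
from dataclasses import dataclass

# Example inclusion/exclusion criteria, for illustration only.
CRITERIA = """Include randomized controlled trials in adult humans that report
a depression-related outcome. Exclude reviews, protocols, and animal studies."""

@dataclass
class ScreeningDecision:
    record_id: str
    decision: str   # "include", "exclude", or "unsure" (routed to a human)
    model_tag: str  # model name and version, logged for the methods section

def ask_llm(prompt: str) -> str:
    """Placeholder: wire this to whichever LLM API your team has chosen."""
    raise NotImplementedError

def screen(record_id: str, title: str, abstract: str,
           model_tag: str = "example-model-2024-06") -> ScreeningDecision:
    prompt = (f"Screening criteria:\n{CRITERIA}\n\n"
              f"Title: {title}\nAbstract: {abstract}\n\n"
              "Answer with exactly one word: include, exclude, or unsure.")
    answer = ask_llm(prompt).strip().lower()
    if answer not in {"include", "exclude"}:
        answer = "unsure"  # anything ambiguous goes to a human screener
    return ScreeningDecision(record_id, answer, model_tag)
```

Logging the model tag alongside every decision is what makes the authors’ reporting requirement practical: the methods section can then state exactly which tool and version screened which records.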
Effects of Open Access. Literature study on empirical research 2010–2021
Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research-funding and research-performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.
Artificial Intelligence Applications for Social Science Research
Report by Megan Stubbs-Richardson et al.: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, an AI tool had to be useful for (1) literature reviews, summaries, or writing; (2) data collection, analysis, or visualization; or (3) research dissemination. In the database, we provide a name, description, and links for each AI tool, current as of the publication date of September 29, 2023. Supporting links were provided when an AI tool was found through other databases. To help users evaluate the potential usefulness of each tool, we documented costs, log-in requirements, and whether plug-ins or browser extensions are available. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when an AI tool was useful for text-based data, such as social media. This database includes 132 AI tools that may be useful for literature reviews or writing; 146 that may be useful for data collection, analysis, or visualization; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”
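The fields the team describes (name, description, links, cost, log-in requirements, plug-ins, category, social media applicability) amount to a simple record schema. A minimal sketch of one entry, with field names of our own choosing rather than the team’s actual column names:

```python
from dataclasses import dataclass, field

CATEGORIES = {"literature", "data", "dissemination"}

@dataclass
class AITool:
    name: str
    description: str
    url: str
    cost: str                        # e.g. "free", "freemium", "paid"
    login_required: bool
    has_plugin_or_extension: bool
    categories: set[str] = field(default_factory=set)  # subset of CATEGORIES
    social_media_text: bool = False  # useful for text-based social media data?

# Hypothetical entry, for illustration only.
tools = [
    AITool("ExampleSummarizer", "Summarizes papers.", "https://example.org",
           "freemium", True, False, {"literature", "data"}, False),
]

# A tool may sit in several categories, which is why the per-category counts
# (132 + 146 + 108 in the report) legitimately exceed the 250-tool total.
literature_tools = [t for t in tools if "literature" in t.categories]
```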
Societal interaction plans—A tool for enhancing societal engagement of strategic research in Finland
Paper by Kirsi Pulkkinen, Timo Aarrevaara, Mikko Rask, and Markku Mattila: “…we investigate the practices and capacities that define successful societal interaction of research groups with stakeholders in mutually beneficial processes. We studied the Finnish Strategic Research Council’s (SRC) first funded projects through a dynamic governance lens. The aim of the paper is to explore how societal interaction was designed and commenced at the onset of the projects, in order to understand the logic through which the consortia expected broad impacts to occur. The Finnish SRC introduced a societal interaction plan (SIP) approach, which requires research consortia to consider societal interaction alongside research activities in a way that exceeds conventional research plans. Hence, the first SRC projects’ SIPs, and the implemented activities and working logics discussed in the interviews, provide a window into exploring how active societal interaction reflects the call for dynamic, sustainable practices and new capabilities to better link research to societal development. We found that the capacities of dynamic governance were implemented by integrating societal interaction into research, in particular through a ‘drizzling’ approach. In these emerging practices, SIP designs function as platforms for the formation of communities of experts, rather than as traditional project management models or mere communication tools. The research groups leveraged the benefits of pooling academic knowledge and skills with other types of expertise for mutual gain. They embraced the limits of their expertise and reached out to societal partners to truly broker knowledge, and to exchange and develop capacities and perspectives for solving grand societal challenges…(More)”.
Will we run out of data? Limits of LLM scaling based on human-generated data
Paper by Pablo Villalobos: “We investigate the potential constraints on LLM scaling posed by the availability of public human-generated text data. We forecast the growing demand for training data based on current trends and estimate the total stock of public human text data. Our findings indicate that if current LLM development trends continue, models will be trained on datasets roughly equal in size to the available stock of public human text data between 2026 and 2032, or slightly earlier if models are overtrained. We explore how progress in language modeling can continue when human-generated text datasets cannot be scaled any further. We argue that synthetic data generation, transfer learning from data-rich domains, and data efficiency improvements might support further progress…(More)”.
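The headline estimate is, at its core, a crossover calculation: training-set size grows roughly exponentially while the stock of public human text is finite. A back-of-envelope sketch with illustrative stand-in numbers (the stock, current dataset size, and growth rate below are placeholders, not the paper’s estimates):

```python
import math

stock_tokens = 3e14      # assumed total stock of public human-generated text
current_tokens = 1.5e13  # assumed training-set size of a frontier model today
growth_per_year = 2.5    # assumed yearly multiplier in training-set size
base_year = 2024

# Solve current_tokens * growth_per_year**t = stock_tokens for t.
years_left = math.log(stock_tokens / current_tokens) / math.log(growth_per_year)
print(f"Stock exhausted around {base_year + years_left:.0f}")
# -> roughly 2027 under these assumptions; slower growth pushes the date out,
#    while overtraining (more tokens per parameter) pulls it in.
```

Varying the assumed growth rate and stock across plausible ranges is what yields a window like the paper’s 2026-2032 rather than a single year.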
Japan’s push to make all research open access is taking shape
Article by Dalmeet Singh Chawla: “The Japanese government is pushing ahead with a plan to make Japan’s publicly funded research output free to read. In June, the science ministry will assign funding to universities to build the infrastructure needed to make research papers free to read on a national scale. The move follows the ministry’s announcement in February that researchers who receive government funding will be required to make their papers freely available to read in institutional repositories from April 2025.
The Japanese plan “is expected to enhance the long-term traceability of research information, facilitate secondary research and promote collaboration”, says Kazuki Ide, a health-sciences and public-policy scholar at Osaka University in Suita, Japan, who has written about open access in Japan.
The nation is one of the first Asian countries to make notable advances towards making research open access (OA), and among the first in the world to forge a nationwide OA plan.
The plan follows in the footsteps of the influential Plan S, introduced six years ago by a group of research funders in the United States and Europe known as cOAlition S, to accelerate the move to OA publishing. The United States also implemented an OA mandate in 2022 that requires all research funded by US taxpayers to be freely available from 2026…(More)”.
Behavioral Decision Analysis
Book edited by Florian M. Federspiel, Gilberto Montibeller, and Matthias Seifert: “This book lays out a foundation and taxonomy for Behavioral Decision Analysis, featuring representative work across various domains. Traditional research in the domain of Decision Analysis has focused on the design and application of logically consistent tools to support decision makers during the process of structuring problem complexity, modeling uncertainty, generating predictions, eliciting preferences, and, ultimately, making better decisions. Two commonly held assumptions are that the decision maker’s cognitive belief system is fully accessible and that this system can be understood and formalized by trained analysts. In recent years, however, an active line of research has emerged studying instances in which such assumptions may not hold. This book unites this community under the common theme of Behavioral Decision Analysis. The taxonomy used in this book categorizes research based on task focus (prediction or decision) and behavioral level (individual or group). The book introduces two theoretical lenses that lie at the interface between (1) normative and descriptive research and (2) normative and prescriptive research. It then proceeds to highlight representative works across the two lenses, focused on individual- and group-level decision making. Featuring various methodologies and applications, the book serves as a reference for researchers, students, and professionals across different disciplines with a common interest in Behavioral Decision Analysis…(More)”.
Science in the age of AI
Report by the Royal Society: “The unprecedented speed and scale of progress with artificial intelligence (AI) in recent years suggests society may be living through an inflection point. With the growing availability of large datasets, new algorithmic techniques and increased computing power, AI is becoming an established tool used by researchers across scientific fields who seek novel solutions to age-old problems. Now more than ever, we need to understand the extent of the transformative impact of AI on science and what scientific communities need to do to fully harness its benefits.
This report, Science in the age of AI (PDF), explores how AI technologies, such as deep learning or large language models, are transforming the nature and methods of scientific inquiry. It also explores how notions of research integrity, research skills, and research ethics are inevitably changing, and what the implications are for the future of science and scientists.
The report addresses the following questions:
- How are AI-driven technologies transforming the methods and nature of scientific research?
- What are the opportunities, limitations, and risks of these technologies for scientific research?
- How can relevant stakeholders (governments, universities, industry, research funders, etc.) best support the development, adoption, and uses of AI-driven technologies in scientific research?
In answering these questions, the report integrates evidence from a range of sources, including research activities with more than 100 scientists and the advice of an expert working group, as well as a taxonomy of AI in science (PDF), a historical review (PDF) on the role of disruptive technologies in transforming science and society, and a patent landscape review (PDF) of artificial-intelligence-related inventions, all of which are available to download…(More)”
Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science
Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China.
While the comparison to CERN is flawed in some respects (see below), the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas (access to computational resources, access to high-quality data, and access to purposeful modeling) represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only resolve the immediate challenges but also, more generally, advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.
Participatory mapping as a social digital tool
Blog by María de los Ángeles Briones: “…we will use 14 different examples from different continents and contexts to explore the goals and methods used for participatory mapping as a social digital tool. Although the case studies look very different and come from a range of cultural backgrounds, they share a number of similarities.
Although the examples have different goals, we have identified four main focus areas: activism, conviviality, networking and urban planning. More localised mapping projects often had a focus on activism. We also see that maps are not isolated tools; they work in combination with other communication tools and platforms.
The internet has transformed communications and networks across the globe, allowing for interconnectivity and scalability of information among and between different groups of society. It allows voices, regardless of their location, to be amplified and heard by many others in pursuit of collective goals. This has great potential in a globalized world where top-down initiatives are evidently not enough to meet many of the social needs that local people experience. However, though the internet makes sharing and collaborating between people easier, offline maps are still valuable, as shown in some of our examples.
The similarity between the different maps that we explored is that they are all social digital tools. They are social because they relate to projects that seek to address social needs, and they are digital because they are built on digital platforms that allow them to stay alive and to be spread, shared, and used. These characteristics also refer to their function and design.
A tool can be defined as a device or implement, especially one held in the hand, used to carry out a particular function. So when we speak of a tool, there are four things involved: an actor, an object, a function and a purpose. Just as a hammer is a tool that a carpenter (actor) uses to hammer nails (function) and thus build something (purpose), we understand that social tools are used by one or more people to take actions whose final objective is to meet a social need…(More)”.