AI-Ready FAIR Data: Accelerating Science through Responsible AI and Data Stewardship


Article by Sean Hill: “Imagine a future where scientific discovery is unbound by the limitations of data accessibility and interoperability. In this future, researchers across all disciplines — from biology and chemistry to astronomy and social sciences — can seamlessly access, integrate, and analyze vast datasets with the assistance of advanced artificial intelligence (AI). This world is one where AI-ready data empowers scientists to unravel complex problems at unprecedented speeds, leading to breakthroughs in medicine, environmental conservation, technology, and more. The vision of a truly FAIR (Findable, Accessible, Interoperable, Reusable) and AI-ready data ecosystem, underpinned by Responsible AI (RAI) practices and the pivotal role of data stewards, promises to revolutionize the way science is conducted, fostering an era of rapid innovation and global collaboration…(More)”.

The societal impact of Open Science: a scoping review


Report by Nicki Lisa Cole, Eva Kormann, Thomas Klebel, Simon Apartis and Tony Ross-Hellauer: “Open Science (OS) aims, in part, to drive greater societal impact of academic research. Government, funder and institutional policies state that it should further democratize research and increase learning and awareness, evidence-based policy-making, the relevance of research to society’s problems, and public trust in research. Yet, measuring the societal impact of OS has proven challenging and synthesized evidence of it is lacking. This study fills this gap by systematically scoping the existing evidence of societal impact driven by OS and its various aspects, including Citizen Science (CS), Open Access (OA), Open/FAIR Data (OFD), Open Code/Software and others. Using the PRISMA Extension for Scoping Reviews and searches conducted in Web of Science, Scopus and relevant grey literature, we identified 196 studies that contain evidence of societal impact. The majority concern CS, with some focused on OA, and only a few addressing other aspects. Key areas of impact found are education and awareness, climate and environment, and social engagement. We found no literature documenting evidence of the societal impact of OFD and limited evidence of societal impact in terms of policy, health, and trust in academic research. Our findings demonstrate a critical need for additional evidence and suggest practical and policy implications…(More)”.

Real Chaos, Today! Are Randomized Controlled Trials a good way to do economics?


Article by Maia Mindel: “A few weeks back, there was much social media drama about a paper titled “Social Media and Job Market Success: A Field Experiment on Twitter” (2024) by Jingyi Qiu, Yan Chen, Alain Cohn, and Alvin Roth (recipient of the 2012 Nobel Prize in Economics). The study collected job market papers by economics PhDs, and then assigned prominent economists (who had volunteered) to randomly promote half of them on their profiles (more detail on this paper in a bit).

The “drama” in question was generally: “it is immoral to throw dice around on the most important aspect of a young economist’s career”, versus “no it’s not”. This, of course, awakened interest in a broader subject: Randomized Controlled Trials, or RCTs.

R.C.T. T.O. G.O.

Let’s go back to the 1600s – bloodletting was a common way to cure diseases. Did it work? Well, the physician Joan Baptista van Helmont had an idea: randomly divvy up a few hundred invalids into two groups, one of which got bloodletting applied, and another one that didn’t.

While it’s not clear this experiment ever happened, it sets up the basic principle of the randomized controlled trial: to study the effects of a treatment (in a medical context, a medicine; in an economics context, a policy), a sample group is divided in two: the control group, which does not receive any treatment, and the treatment group, which does. The modern randomized controlled (or control) trial has three “legs”: it’s randomized because who’s in each group is chosen at random, it’s controlled because a group that doesn’t get the treatment serves as a counterfactual, and it’s a trial because you’re not deploying “at scale” just yet.

Why could it be important to randomly select people for economic studies? Well, you want the only difference, on average, between the two groups to be whether or not they get the treatment. Consider military service: it’s regularly trotted out that drafting kids would reduce crime rates. Is this true? Well, the average person who is exempted from the draft could be systematically different from the average person who isn’t – for example, people who volunteer could be from wealthier families who are more patriotic, or poorer families who need certain benefits; or they could have physical disabilities that impede their labor market participation, or be wealthier university students who get a deferral. But because many countries use lotteries to allocate draftees versus non-draftees, you can get a group of people who are randomly assigned to the draft, and who on average should be similar enough to each other. One study in particular, about Argentina’s mandatory military service in pretty much all of the 20th century, finds that being conscripted raises the crime rate relative to people who didn’t get drafted through the lottery. This doesn’t mean that soldiers have higher crime rates than non-soldiers, because of selection issues – but it does provide pretty good evidence that getting drafted is not good for your non-criminal prospects…(More)”.
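The logic of the excerpt above can be sketched in a few lines of code. This is a minimal, hypothetical simulation (the population, effect size, and outcome scale are invented for illustration, not taken from the Argentine study): a lottery assigns people to treatment or control, and because assignment is random, the simple difference in group means recovers the true average treatment effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: each person has a baseline outcome (say, some
# index of labor-market success) drawn from the same distribution.
TRUE_EFFECT = 2.0  # assumed average effect of the treatment
population = [random.gauss(10.0, 3.0) for _ in range(10_000)]

# Randomize: a lottery decides who is treated, so the two groups are
# statistically similar before treatment -- no self-selection.
assignments = [random.random() < 0.5 for _ in population]

# Treated individuals receive the treatment effect; controls do not.
treated = [y + TRUE_EFFECT for y, t in zip(population, assignments) if t]
control = [y for y, t in zip(population, assignments) if not t]

# Because assignment was random, the difference in means is an unbiased
# estimate of the average treatment effect.
estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

If assignment were instead correlated with the baseline outcome (as with draft exemptions for students or the disabled), the same difference-in-means comparison would be biased – which is exactly the selection problem the lottery solves.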

The Essential Principle for Appropriate Data Policy of Citizen Science Projects


Chapter by Takeshi Osawa: “Citizen science is one of the new paradigms of science. This concept features various project forms, participants, and motivations and implies the need for attention to ethical issues for every participant, which frequently includes nonacademics. In this chapter, I address ethical issues associated with citizen science projects that focus on the data treatment rule and demonstrate a concept of appropriate data policy for these projects. First, I demonstrate that citizen science projects tend to include different types of collaboration, which may lead to certain conflicts among participants in terms of data sharing. Second, I propose an idea that could integrate different types of collaboration according to transcend theory. Third, I take a case of a citizen science project in which such transcendence occurred and elucidate the difference between ordinary research and citizen science projects, specifically in terms of the goals of these projects and the goals and motivations of participants, which may change. Finally, I propose one conceptual idea on how the principal investigator (PI) of a citizen science project can establish data policy after assessing the rights of participants. The basic idea is the division and organization of the data policy in a hierarchy for the project and for the participants. Data policy is one of the important items for establishing the appropriate methods for citizen science as a new style of science. As such, practice and framing related to data policy must be carefully monitored and reflected on…(More)”.

Embracing the Social in Social Science


Article by Jay Lloyd: “In a world where science is inextricably intermixed with society, the social sciences are essential to building trust in the scientific enterprise.

To begin thinking about why all the sciences should embrace the social in social science, I would like to start with cupcakes.

In my research, context is a recurring theme, so let me give you some context for cupcakes as metaphor. A few months ago, when I was asked to respond to an article in this magazine, I wrote: “In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the ‘cake’ in the cupcake.”

In writing these sentences, I was, and still am, hopeful that all kinds of future scientific research will make social science a key component of the scientific “batter” and bake social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

But there are tensions and power differentials in the ways interdisciplinary science can be done. Most importantly, the formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not. And some of those knowledge systems do not embrace the importance of insights from the social sciences because many institutions of science work hard to insulate the practice of science from the contingencies of society.

Moving forward, how do we, as researchers, develop questions that not only welcome intellectual variety within the sciences but also embrace the diversity represented in societies? As science continues to more powerfully blend, overlap, and intermix with society, embracing what social science can bring to the entire scientific enterprise is necessary. In order to accomplish these important goals, social concerns must be a key ingredient of the whole cupcake—not an afterthought, or decoration, but among the first thoughts…(More)”

Multi-disciplinary Perspectives on Citizen Science—Synthesizing Five Paradigms of Citizen Involvement


Paper by Susanne Beck, Dilek Fraisl, Marion Poetz and Henry Sauermann: “Research on Open Innovation in Science (OIS) investigates how open and collaborative practices influence the scientific and societal impact of research. Since 2019, the OIS Research Conference has brought together scholars and practitioners from diverse backgrounds to discuss OIS research and case examples. In this meeting report, we describe four session formats that have allowed our multi-disciplinary community to have productive discussions around opportunities and challenges related to citizen involvement in research. However, these sessions also highlight the need for a better understanding of the underlying rationales of citizen involvement in an increasingly diverse project landscape. Building on the discussions at the 2023 and prior editions of the conference, we outline a conceptual framework of five crowd paradigms and present an associated tool that can aid in understanding how citizen involvement in particular projects can help advance science. We illustrate this tool using cases presented at the 2023 conference, and discuss how it can facilitate discussions at future conferences as well as guide future research and practice in citizen science…(More)”.

Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.

How to optimize the systematic review process using AI tools


Paper by Nicholas Fabiano et al: “Systematic reviews are a cornerstone for synthesizing the available evidence on a given topic. They simultaneously allow for gaps in the literature to be identified and provide direction for future research. However, due to the ever-increasing volume and complexity of the available literature, traditional methods for conducting systematic reviews are less efficient and more time-consuming. Numerous artificial intelligence (AI) tools are being released with the potential to optimize efficiency in academic writing and assist with various stages of the systematic review process, including developing and refining search strategies, screening titles and abstracts against inclusion or exclusion criteria, extracting essential data from studies and summarizing findings. Therefore, in this article we provide an overview of the currently available tools and how they can be incorporated into the systematic review process to improve the efficiency and quality of research synthesis. We emphasize that authors must report all AI tools that have been used at each stage to ensure replicability as part of reporting in methods…(More)”.

Effects of Open Access. Literature study on empirical research 2010–2021


Paper by David Hopf, Sarah Dellmann, Christian Hauschke, and Marco Tullney: “Open access — the free availability of scholarly publications — intuitively offers many benefits. At the same time, some academics, university administrators, publishers, and political decision-makers express reservations. Many empirical studies on the effects of open access have been published in the last decade. This report provides an overview of the state of research from 2010 to 2021. The empirical results on the effects of open access help to determine the advantages and disadvantages of open access and serve as a knowledge base for academics, publishers, research funding and research performing institutions, and policy makers. This overview of current findings can inform decisions about open access and publishing strategies. In addition, this report identifies aspects of the impact of open access that are potentially highly relevant but have not yet been sufficiently studied…(More)”.

Artificial Intelligence Applications for Social Science Research


Report by Megan Stubbs-Richardson et al: “Our team developed a database of 250 Artificial Intelligence (AI) applications useful for social science research. To be included in our database, the AI tool had to be useful for: 1) literature reviews, summaries, or writing, 2) data collection, analysis, or visualizations, or 3) research dissemination. In the database, we provide a name, description, and links to each of the AI tools that were current at the time of publication on September 29, 2023. Supporting links were provided when an AI tool was found using other databases. To help users evaluate the potential usefulness of each tool, we documented information about costs, log-in requirements, and whether plug-ins or browser extensions are available for each tool. Finally, as we are a team of scientists who are also interested in studying social media data to understand social problems, we also documented when the AI tools were useful for text-based data, such as social media. This database includes 132 AI tools that may have use for literature reviews or writing; 146 tools that may have use for data collection, analyses, or visualizations; and 108 that may be used for dissemination efforts. While 170 of the AI tools within this database can be used for general research purposes, 18 are specific to social media data analyses, and 62 can be applied to both. Our database thus offers some of the recently published tools for exploring the application of AI to social science research…(More)”