Gen AI: too much spend, too little benefit?


Article by Jason Koebler: “Investment giant Goldman Sachs published a research paper about the economic viability of generative AI which notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.” 

The paper, called “Gen AI: too much spend, too little benefit?”, is based on a series of interviews with Goldman Sachs economists and researchers, MIT professor Daron Acemoglu, and infrastructure experts. The paper ultimately questions whether generative AI will ever become the transformative technology that Silicon Valley and large portions of the stock market are currently betting on, but says investors may continue to get rich anyway. “Despite these concerns and constraints, we still see room for the AI theme to run, either because AI starts to deliver on its promise, or because bubbles take a long time to burst,” the paper notes.

Goldman Sachs researchers also say that AI optimism is driving large gains in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but that these gains rest on the assumption that generative AI will lead to higher productivity (which necessarily means automation, layoffs, lower labor costs, and higher efficiency). These gains are already baked in, Goldman Sachs argues in the paper: “Although the productivity pick-up that AI promises could benefit equities via higher profit growth, we find that stocks often anticipate higher productivity growth before it materializes, raising the risk of overpaying. And using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.”…(More)”.

Doing science backwards


Article by Stuart Ritchie: “…Usually, the process of publishing such a study would look like this: you run the study; you write it up as a paper; you submit it to a journal; the journal gets some other scientists to peer-review it; it gets published – or if it doesn’t, you either discard it, or send it off to a different journal and the whole process starts again.

That’s standard operating procedure. But it shouldn’t be. Think about the job of the peer-reviewer: when they start their work, they’re handed a full-fledged paper, reporting on a study and a statistical analysis that happened at some point in the past. It’s all now done and, if not fully dusted, then in a pretty final-looking form.

What can the reviewer do? They can check the analysis makes sense, sure; they can recommend new analyses are done; they can even, in extreme cases, make the original authors go off and collect some entirely new data in a further study – maybe the data the authors originally presented just aren’t convincing or don’t represent a proper test of the hypothesis.

Ronald Fisher described the study-first, review-later process in 1938:

To consult the statistician [or, in our case, peer-reviewer] after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.

Clearly this isn’t the optimal, most efficient way to do science. Why don’t we review the statistics and design of a study right at the beginning of the process, rather than at the end?

This is where Registered Reports come in. They’re a new (well, new-ish) way of publishing papers where, before you go to the lab, or wherever you’re collecting data, you write down your plan for your study and send it off for peer-review. The reviewers can then give you genuinely constructive criticism – you can literally construct your experiment differently depending on their suggestions. You build consensus—between you, the reviewers, and the journal editor—on the method of the study. And then, once everyone agrees on what a good study of this question would look like, you go off and do it. The key part is that, at this point, the journal agrees to publish your study, regardless of what the results might eventually look like…(More)”.

Enhancing human mobility research with open and standardized datasets


Article by Takahiro Yabe et al: “The proliferation of large-scale, passively collected location data from mobile devices has enabled researchers to gain valuable insights into various societal phenomena. In particular, research into the science of human mobility has become increasingly critical thanks to its interdisciplinary applications in fields including urban planning, transportation engineering, public health, disaster management, and economic analysis. Researchers in the computational social science, complex systems, and behavioral science communities have used such granular mobility data to uncover universal laws and theories governing individual and collective human behavior. Moreover, computer science researchers have focused on developing computational and machine learning models capable of predicting complex behavior patterns in urban environments. Prominent papers include pattern-based and deep learning approaches to next-location prediction and physics-inspired approaches to flow prediction and generation.
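
To give a flavor of the pattern-based approaches mentioned above, here is a minimal sketch (ours, not from the article; the location trace and place names are invented) of a first-order Markov predictor, which guesses a user's next location as the most frequent transition observed from their current one:

```python
from collections import Counter, defaultdict

# Hypothetical location trace for one user (all place names are made up).
trace = ["home", "cafe", "office", "cafe", "office", "gym", "home",
         "cafe", "office", "gym", "home"]

# Count observed transitions between consecutive locations.
transitions = defaultdict(Counter)
for here, there in zip(trace, trace[1:]):
    transitions[here][there] += 1

def predict_next(current: str) -> str | None:
    """Most frequent next location observed after `current`, or None if unseen."""
    counts = transitions.get(current)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("cafe"))    # -> "office" (3 observed transitions)
print(predict_next("office"))  # -> "gym" (2 observed) over "cafe" (1)
```

Deep learning approaches replace these raw transition counts with learned representations of places and trajectories, but the prediction task they solve is the same.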

Regardless of the research problem of interest, human mobility datasets often come with substantial limitations. Existing publicly available datasets are often small, limited to specific transport modes, or geographically restricted, because privacy concerns have prevented the release of open-source, large-scale human mobility datasets…(More)”.

AI-Ready FAIR Data: Accelerating Science through Responsible AI and Data Stewardship


Article by Sean Hill: “Imagine a future where scientific discovery is unbound by the limitations of data accessibility and interoperability. In this future, researchers across all disciplines — from biology and chemistry to astronomy and social sciences — can seamlessly access, integrate, and analyze vast datasets with the assistance of advanced artificial intelligence (AI). This world is one where AI-ready data empowers scientists to unravel complex problems at unprecedented speeds, leading to breakthroughs in medicine, environmental conservation, technology, and more. The vision of a truly FAIR (Findable, Accessible, Interoperable, Reusable) and AI-ready data ecosystem, underpinned by Responsible AI (RAI) practices and the pivotal role of data stewards, promises to revolutionize the way science is conducted, fostering an era of rapid innovation and global collaboration…(More)”.

The societal impact of Open Science: a scoping review


Report by Nicki Lisa Cole, Eva Kormann, Thomas Klebel, Simon Apartis and Tony Ross-Hellauer: “Open Science (OS) aims, in part, to drive greater societal impact of academic research. Government, funder and institutional policies state that it should further democratize research and increase learning and awareness, evidence-based policy-making, the relevance of research to society’s problems, and public trust in research. Yet, measuring the societal impact of OS has proven challenging and synthesized evidence of it is lacking. This study fills this gap by systematically scoping the existing evidence of societal impact driven by OS and its various aspects, including Citizen Science (CS), Open Access (OA), Open/FAIR Data (OFD), Open Code/Software and others. Using the PRISMA Extension for Scoping Reviews and searches conducted in Web of Science, Scopus and relevant grey literature, we identified 196 studies that contain evidence of societal impact. The majority concern CS, with some focused on OA, and only a few addressing other aspects. Key areas of impact found are education and awareness, climate and environment, and social engagement. We found no literature documenting evidence of the societal impact of OFD and limited evidence of societal impact in terms of policy, health, and trust in academic research. Our findings demonstrate a critical need for additional evidence and suggest practical and policy implications…(More)”.

Real Chaos, Today! Are Randomized Controlled Trials a good way to do economics?


Article by Maia Mindel: “A few weeks back, there was much social media drama about a paper titled “Social Media and Job Market Success: A Field Experiment on Twitter” (2024) by Jingyi Qiu, Yan Chen, Alain Cohn, and Alvin Roth (recipient of the 2012 Nobel Prize in Economics). The study posted job market papers by economics PhDs, and then assigned prominent economists (who had volunteered) to randomly promote half of them on their profiles (more detail on this paper in a bit).

The “drama” in question was generally: “it is immoral to throw dice around on the most important aspect of a young economist’s career”, versus “no it’s not”. This, of course, awakened interest in a broader subject: Randomized Controlled Trials, or RCTs.

R.C.T. T.O. G.O.

Let’s go back to the 1600s – bloodletting was a common way to cure diseases. Did it work? Well, the doctor Joan Baptista van Helmont had an idea: randomly divvy up a few hundred invalids into two groups, one of which got bloodletting applied, and another one that didn’t.

While it’s not clear this experiment ever happened, it sets up the basic principle of the randomized control trial: to study the effects of a treatment (in a medical context, a medicine; in an economics context, a policy), a sample is divided in two: the control group, which does not receive any treatment, and the treatment group, which does. The modern randomized controlled (or control) trial has three “legs”: it’s randomized because who’s in each group gets chosen at random, it’s controlled because there’s a group that doesn’t get the treatment to serve as a counterfactual, and it’s a trial because you’re not developing “at scale” just yet.
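
To make those three legs concrete, here is a minimal simulation sketch (ours, not from the article; the sample size, effect size, and outcome scale are all invented) of randomly assigning a sample to treatment and control and estimating the effect as a difference in group means:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

N = 1000
TRUE_EFFECT = 2.0  # hypothetical effect of the treatment on the outcome

# Randomized: assignment to treatment or control is decided by lottery.
ids = list(range(N))
random.shuffle(ids)
treated = set(ids[: N // 2])

def outcome(i: int) -> float:
    """Simulated outcome: noisy baseline, plus the true effect if treated."""
    return random.gauss(10.0, 3.0) + (TRUE_EFFECT if i in treated else 0.0)

results = {i: outcome(i) for i in ids}

# Controlled: the untreated half serves as the counterfactual, so the
# difference in group means estimates the average treatment effect.
treated_mean = statistics.mean(results[i] for i in treated)
control_mean = statistics.mean(results[i] for i in ids if i not in treated)
print(f"estimated effect: {treated_mean - control_mean:.2f} (true: {TRUE_EFFECT})")
```

Because assignment is random, the two groups differ, on average, only in treatment status, so the difference in means is an unbiased estimate of the treatment effect.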

Why could it be important to randomly select people for economic studies? Well, you want the only difference, on average, between the two groups to be whether or not they get the treatment. Consider military service: it’s regularly trotted out that drafting kids would reduce crime rates. Is this true? Well, the average person who is exempted from the draft could be systematically different from the average person who isn’t – for example, people who volunteer could be from wealthier families who are more patriotic, or from poorer families who need certain benefits; people who are exempted could have physical disabilities that impede their labor market participation, or be wealthier university students who get a deferral. But because many countries use lotteries to allocate draftees versus non-draftees, you can get a group of people who are randomly assigned to the draft, and who on average should be similar enough to each other. One study in particular, about Argentina’s mandatory military service in pretty much all of the 20th century, finds that being conscripted raises the crime rate relative to people who didn’t get drafted through the lottery. This doesn’t mean that soldiers have higher crime rates than non-soldiers, because of selection issues – but it does provide pretty good evidence that getting drafted is not good for your non-criminal prospects…(More)”.

The Essential Principle for Appropriate Data Policy of Citizen Science Projects


Chapter by Takeshi Osawa: “Citizen science is one of the new paradigms of science. This concept features various project forms, participants, and motivations and implies the need for attention to ethical issues for every participant, which frequently includes nonacademics. In this chapter, I address ethical issues associated with citizen science projects, focusing on data treatment rules, and demonstrate a concept of appropriate data policy for these projects. First, I demonstrate that citizen science projects tend to include different types of collaboration, which may lead to certain conflicts among participants in terms of data sharing. Second, I propose an idea that could integrate different types of collaboration according to the transcend theory. Third, I take the case of a citizen science project through which transcendence occurred and elucidate the differences between ordinary research and citizen science projects, specifically in terms of the goals of these projects and the goals and motivations of participants, which may change. Finally, I propose one conceptual idea on how the principal investigator (PI) of a citizen science project can establish a data policy after assessing the rights of participants. The basic idea is the division and organization of the data policy in a hierarchy for the project and for the participants. Data policy is one of the important items for establishing appropriate methods for citizen science as a new style of science. As such, practice and framing related to data policy must be carefully monitored and reflected on…(More)”.

Embracing the Social in Social Science


Article by Jay Lloyd: “In a world where science is inextricably intermixed with society, the social sciences are essential to building trust in the scientific enterprise.

To begin thinking about why all the sciences should embrace the social in social science, I would like to start with cupcakes.

In my research, context is a recurring theme, so let me give you some context for cupcakes as metaphor. A few months ago, when I was asked to respond to an article in this magazine, I wrote: “In the production of science, social scientists can often feel like sprinkles on a cupcake: not essential. Social science is not the egg, the flour, or the sugar. Sprinkles are neither in the batter, nor do they see the oven. Sprinkles are a late addition. No matter the stylistic or aesthetic impact, they never alter the substance of the ‘cake’ in the cupcake.”

In writing these sentences, I was, and still am, hopeful that all kinds of future scientific research will make social science a key component of the scientific “batter” and bake social scientific knowledge, skill, and expertise into twenty-first-century scientific “cupcakes.”

But there are tensions and power differentials in the ways interdisciplinary science can be done. Most importantly, the formation of questions itself is a site of power. The questions we as a society ask science to address both reflect and create the values and power dynamics of social systems, whether the scientific disciplines recognize this influence or not. And some of those knowledge systems do not embrace the importance of insights from the social sciences because many institutions of science work hard to insulate the practice of science from the contingencies of society.

Moving forward, how do we, as researchers, develop questions that not only welcome intellectual variety within the sciences but also embrace the diversity represented in societies? As science continues to more powerfully blend, overlap, and intermix with society, embracing what social science can bring to the entire scientific enterprise is necessary. In order to accomplish these important goals, social concerns must be a key ingredient of the whole cupcake—not an afterthought or decoration, but among the first thoughts…(More)”.

Multi-disciplinary Perspectives on Citizen Science—Synthesizing Five Paradigms of Citizen Involvement


Paper by Susanne Beck, Dilek Fraisl, Marion Poetz and Henry Sauermann: “Research on Open Innovation in Science (OIS) investigates how open and collaborative practices influence the scientific and societal impact of research. Since 2019, the OIS Research Conference has brought together scholars and practitioners from diverse backgrounds to discuss OIS research and case examples. In this meeting report, we describe four session formats that have allowed our multi-disciplinary community to have productive discussions around opportunities and challenges related to citizen involvement in research. However, these sessions also highlight the need for a better understanding of the underlying rationales of citizen involvement in an increasingly diverse project landscape. Building on the discussions at the 2023 and prior editions of the conference, we outline a conceptual framework of five crowd paradigms and present an associated tool that can aid in understanding how citizen involvement in particular projects can help advance science. We illustrate this tool using cases presented at the 2023 conference, and discuss how it can facilitate discussions at future conferences as well as guide future research and practice in citizen science…(More)”.

Handbook on Public Policy and Artificial Intelligence


Book edited by Regine Paul, Emma Carmel and Jennifer Cobbe: “…explores the relationship between public policy and artificial intelligence (AI) technologies across a broad range of geographical, technical, political and policy contexts. It contributes to critical AI studies, focusing on the intersection of the norms, discourses, policies, practices and regulation that shape AI in the public sector.

Expert authors in the field discuss the creation and use of AI technologies, and how public authorities respond to their development, by bringing together emerging scholarly debates about AI technologies with longer-standing insights on public administration, policy, regulation and governance. Contributions in the Handbook mobilize diverse perspectives to critically examine techno-solutionist approaches to public policy and AI, dissect the politico-economic interests underlying AI promotion and analyse implications for sustainable development, fairness and equality. Ultimately, this Handbook questions whether regulatory concepts such as ethical, trustworthy or accountable AI safeguard a democratic future or contribute to a problematic de-politicization of the public sector…(More)”.