Book edited by Florian M. Federspiel, Gilberto Montibeller, and Matthias Seifert: “This book lays out a foundation and taxonomy for Behavioral Decision Analysis, featuring representative work across various domains. Traditional research in the domain of Decision Analysis has focused on the design and application of logically consistent tools to support decision makers during the process of structuring problem complexity, modeling uncertainty, generating predictions, eliciting preferences, and, ultimately, making better decisions. Two commonly held assumptions are that the decision maker’s cognitive belief system is fully accessible and that this system can be understood and formalized by trained analysts. However, in past years, an active line of research has emerged studying instances in which such assumptions may not hold. This book unites this community under the common theme of Behavioral Decision Analysis. The taxonomy used in this book categorizes research based on task focus (prediction or decision) and behavioral level (individual or group). Two theoretical lenses that lie at the interface between (1) normative and descriptive research, and (2) normative and prescriptive research are introduced. The book then proceeds to highlight representative works across the two lenses focused on individual and group-level decision making. Featuring various methodologies and applications, the book serves as a reference for researchers, students, and professionals across different disciplines with a common interest in Behavioral Decision Analysis…(More)”.
Science in the age of AI
Report by the Royal Society: “The unprecedented speed and scale of progress with artificial intelligence (AI) in recent years suggests society may be living through an inflection point. With the growing availability of large datasets, new algorithmic techniques and increased computing power, AI is becoming an established tool used by researchers across scientific fields who seek novel solutions to age-old problems. Now more than ever, we need to understand the extent of the transformative impact of AI on science and what scientific communities need to do to fully harness its benefits.
This report, Science in the age of AI (PDF), explores how AI technologies, such as deep learning or large language models, are transforming the nature and methods of scientific inquiry. It also explores how notions of research integrity, research skills, and research ethics are inevitably changing, and what the implications are for the future of science and scientists.
The report addresses the following questions:
- How are AI-driven technologies transforming the methods and nature of scientific research?
- What are the opportunities, limitations, and risks of these technologies for scientific research?
- How can relevant stakeholders (governments, universities, industry, research funders, etc) best support the development, adoption, and uses of AI-driven technologies in scientific research?
In answering these questions, the report integrates evidence from a range of sources, including research activities with more than 100 scientists and the advice of an expert Working Group, as well as a taxonomy of AI in science (PDF), a historical review (PDF) on the role of disruptive technologies in transforming science and society, and a patent landscape review (PDF) of artificial intelligence-related inventions, which are available to download…(More)”
Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science
Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China.
While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only ease these immediate challenges but also, more generally, advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.
Participatory mapping as a social digital tool
Blog by María de los Ángeles Briones: “…we will use 14 different examples from different continents and contexts to explore the goals and methods used for participatory mapping as a social digital tool. Although these case studies look very different and come from a range of cultural backgrounds, they share a number of similarities.
Although the examples have different goals, we have identified four main focus areas: activism, conviviality, networking and urban planning. More localised mapping projects often had a focus on activism. We also see that maps are not isolated tools; they work in combination with other communication tools and platforms.
The internet has transformed communications and networks across the globe – allowing for interconnectivity and scalability of information among and between different groups of society. This allows voices, regardless of their location, to be amplified and heard by many others working towards collective goals. This has great potential in a global world where it is evident that top-down initiatives are not enough to handle many of the social needs that local people experience. However, though the internet makes sharing and collaborating between people easier, offline maps are still valuable, as shown in some of our examples.
What the different maps we explored have in common is that they are all social digital tools. They are social because they relate to projects that seek to solve social needs; and they are digital because they are based on digital platforms that allow them to stay alive and to be spread, shared and used. These characteristics also refer to their function and design.
A tool can be defined as a device or implement, especially one held in the hand, used to carry out a particular function. So when we speak of a tool, there are four things involved: an actor, an object, a function and a purpose. Just as a hammer is a tool that a carpenter (actor) uses to hammer nails (function) and thus build something (purpose), we understand that social tools are used by one or more people to take actions whose final objective is to meet a social need…(More)”.
Internet use statistically associated with higher wellbeing
Article by Oxford University: “Links between internet adoption and wellbeing are likely to be positive, despite popular concerns to the contrary, according to a major new international study from researchers at the Oxford Internet Institute, part of the University of Oxford.
The study examined the psychological wellbeing of more than two million participants across 168 countries from 2006 to 2021 in relation to their internet use. Across 33,792 different statistical models and subsets of data, 84.9% of associations between internet connectivity and wellbeing were positive and statistically significant.
The study analysed data from two million individuals aged 15 to 99 in 168 countries, including countries in Latin America, Asia, and Africa, and found that internet access and use were consistently associated with positive wellbeing.
Matti Vuorre, Assistant Professor at Tilburg University and Research Associate at the Oxford Internet Institute, and Andrew Przybylski, Professor at the Oxford Internet Institute, carried out the study to assess how technology relates to wellbeing in parts of the world that are rarely studied.
Professor Przybylski said: ‘Whilst internet technologies and platforms and their potential psychological consequences remain debated, research to date has been inconclusive and of limited geographic and demographic scope. The overwhelming majority of studies have focused on the Global North and younger people thereby ignoring the fact that the penetration of the internet has been, and continues to be, a global phenomenon’.
‘We set out to address this gap by analysing how internet access, mobile internet access and active internet use might predict psychological wellbeing on a global level across the life stages. To our knowledge, no other research has directly grappled with these issues and addressed the worldwide scope of the debate.’
The researchers studied eight indicators of wellbeing: life satisfaction, daily negative and positive experiences, two indices of social wellbeing, physical wellbeing, community wellbeing and experiences of purpose.
Commenting on the findings, Professor Vuorre said, “We were surprised to find a positive correlation between well-being and internet use across the majority of the thousands of models we used for our analysis.”
Whilst the associations between internet access and use were consistently positive for the average country, the researchers did find some variation by gender and wellbeing indicator: 4.9% of associations linking internet use and community wellbeing were negative, with most of those observed among young women aged 15 to 24.
Whilst the researchers do not identify this as a causal relationship, the paper notes that this specific finding is consistent with previous reports of increased cyberbullying and of more negative associations between social media use and depressive symptoms among young women.
Adds Przybylski, ‘Overall we found that average associations were consistent across internet adoption predictors and wellbeing outcomes, with those who had access to or actively used the internet reporting meaningfully greater wellbeing than those who did not’…(More)” See also: A multiverse analysis of the associations between internet use and well-being
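To make the “multiverse” logic of the study more concrete, here is a minimal sketch of how such an analysis works in principle: enumerate many defensible specification choices (predictor, outcome, covariates, subset), fit a model for each combination, and report the share of estimates that are positive and statistically significant. The data, variable names, and effect sizes below are entirely synthetic and hypothetical; this is an illustration of the general technique, not the authors’ code or results.

```python
# Illustrative sketch of a multiverse / specification-curve analysis.
# All data and variable names are synthetic and hypothetical.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "internet_access": rng.integers(0, 2, n),
    "mobile_internet": rng.integers(0, 2, n),
    "age": rng.integers(15, 99, n),
    "female": rng.integers(0, 2, n),
})
# Simulate wellbeing outcomes with a small positive association with internet use.
for outcome in ["life_satisfaction", "positive_experiences", "social_wellbeing"]:
    df[outcome] = (0.3 * df["internet_access"] + 0.2 * df["mobile_internet"]
                   + rng.normal(0, 1, n))

predictors = ["internet_access", "mobile_internet"]
outcomes = ["life_satisfaction", "positive_experiences", "social_wellbeing"]
covariate_sets = ["", " + age", " + age + female"]      # alternative adjustment choices
subsets = {"all": df, "young_women": df[(df.female == 1) & (df.age <= 24)]}

results = []
for pred, out, covs, (name, sub) in itertools.product(
        predictors, outcomes, covariate_sets, subsets.items()):
    fit = smf.ols(f"{out} ~ {pred}{covs}", data=sub).fit()
    results.append({"predictor": pred, "outcome": out, "covariates": covs,
                    "subset": name, "coef": fit.params[pred],
                    "pvalue": fit.pvalues[pred]})

res = pd.DataFrame(results)
share = ((res.coef > 0) & (res.pvalue < 0.05)).mean()
print(f"{len(res)} specifications; {share:.1%} positive and statistically significant")
```

The study’s 33,792 models arise from the same idea applied over many more wellbeing indicators, subsets, and modeling choices.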
Big data for everyone
Article by Henrietta Howells: “Raw neuroimaging data require further processing before they can be used for scientific or clinical research. Traditionally, this could be accomplished with a single powerful computer. However, much greater computing power is required to analyze the large open-access cohorts that are increasingly being released to the community. And processing pipelines are inconsistently scripted, which can hinder reproducibility efforts. This creates a barrier for labs lacking access to sufficient resources or technological support, potentially excluding them from neuroimaging research. A paper by Hayashi and colleagues in Nature Methods offers a solution. They present https://brainlife.io, a freely available, web-based platform for secure neuroimaging data access, processing, visualization and analysis. It leverages ‘opportunistic computing’, which pools processing power from commercial and academic clouds, making it accessible to scientists worldwide. This is a step towards lowering the barriers for entry into big data neuroimaging research…(More)”.
Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges
Report by the President’s Council of Advisors on Science and Technology (PCAST): “Broadly speaking, scientific advances have historically proceeded via a combination of three paradigms: empirical studies and experimentation; scientific theory and mathematical analyses; and numerical experiments and modeling. In recent years a fourth paradigm, data-driven discovery, has emerged.
These four paradigms complement and support each other. However, all four scientific modalities experience impediments to progress. Verification of a scientific hypothesis through experimentation, careful observation, or via clinical trial can be slow and expensive. The range of candidate theories to consider can be too vast and complex for human scientists to analyze. Truly innovative new hypotheses might only be discovered by fortuitous chance, or by exceptionally insightful researchers. Numerical models can be inaccurate or require enormous amounts of computational resources. Data sets can be incomplete, biased, heterogeneous, or too noisy to analyze using traditional data science methods.
AI tools have obvious applications in data-driven science, but it has also been a long-standing aspiration to use these technologies to remove, or at least reduce, many of the obstacles encountered in the other three paradigms. With the current advances in AI, this dream is on the cusp of becoming a reality: candidate solutions to scientific problems are being rapidly identified, complex simulations are being enriched, and robust new ways of analyzing data are being developed.
By combining AI with the other three research modes, the rate of scientific progress will be greatly accelerated, and researchers will be positioned to meet urgent global challenges in a timely manner. Like most technologies, AI is dual use: AI technology can facilitate both beneficial and harmful applications and can cause unintended negative consequences if deployed irresponsibly or without expert and ethical human supervision. Nevertheless, PCAST sees great potential for advances in AI to accelerate science and technology for the benefit of society and the planet. In this report, we provide a high-level vision for how AI, if used responsibly, can transform the way that science is done, expand the boundaries of human knowledge, and enable researchers to find solutions to some of society’s most pressing problems…(More)”
Digital ethnography: A qualitative approach to digital cultures, spaces, and socialities
Paper by Coppélie Cocq and Evelina Liliequist: “This paper introduces principles for the application and challenges of small data ethnography in digital research. It discusses the need to incorporate ethics in every step of the research process. As teachers and researchers within the digital humanities, we argue for the value of a qualitative approach to digital contents, spaces, and phenomena. This article is relevant as a guide for students and researchers whose studies examine digital practices, phenomena, and social communities that occur in, through, or in relation to digital contexts…(More)”. See also: Digital Ethnography Data Innovation Primer.
Automated Social Science: Language Models as Scientist and Subjects
Paper by Benjamin S. Manning, Kehang Zhu & John J. Horton: “We present an approach for automatically generating and testing, in silico, social scientific hypotheses. This automation is made possible by recent advances in large language models (LLM), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language to state hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or the planning of follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are both proposed and tested by the system, finding evidence for some and not others. We provide evidence that the insights from these simulations of social interactions are not available to the LLM purely through direct elicitation. When given its proposed structural causal model for each scenario, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict the magnitudes of those estimates. In the auction experiment, the in silico simulation results closely match the predictions of auction theory, but elicited predictions of the clearing prices from the LLM are inaccurate. However, the LLM’s predictions are dramatically improved if the model can condition on the fitted structural causal model. In short, the LLM knows more than it can (immediately) tell…(More)”.
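As a rough illustration of the role a structural causal model plays in this pipeline (stating the hypothesis, driving the simulation, and fixing the analysis plan), the sketch below encodes a toy auction SCM in which more bidders raise the clearing price, generates simulated data, and estimates the hypothesized effect. Everything here is a hypothetical stand-in: in the paper, the simulated observations come from LLM-based agents acting out the scenario, not from the random draws used below.

```python
# Toy sketch: a structural causal model (SCM) as hypothesis, simulator, and analysis plan.
# The scenario, variables, and functional forms are hypothetical, not taken from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

def simulate_auction(n_runs: int) -> pd.DataFrame:
    """Hypothesis as an SCM: number of bidders (cause) -> clearing price (effect)."""
    rows = []
    for _ in range(n_runs):
        n_bidders = int(rng.integers(2, 9))          # exogenously varied treatment
        # In the paper's approach each bidder would be an LLM agent placing a bid;
        # random private values stand in for those agents here.
        bids = rng.uniform(0, 100, n_bidders)
        clearing_price = float(np.sort(bids)[-2])    # second-price auction outcome
        rows.append({"n_bidders": n_bidders, "clearing_price": clearing_price})
    return pd.DataFrame(rows)

data = simulate_auction(500)

# Analysis plan implied by the SCM: regress the outcome on the manipulated cause
# and inspect the sign and size of the estimated effect.
fit = smf.ols("clearing_price ~ n_bidders", data=data).fit()
print(f"Estimated effect of one extra bidder on the clearing price: "
      f"{fit.params['n_bidders']:.2f} (p = {fit.pvalues['n_bidders']:.3g})")
```

In this sketch, the fitted regression plays the role of the “fitted structural causal model” that the paper describes as an object available for prediction or for planning follow-on experiments.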
The economic research policymakers actually need
Blog by Jed Kolko: “…The structure of academia just isn’t set up to produce the kind of research many policymakers need. Instead, top academic journal editors and tenure committees reward research that pushes the boundaries of the discipline and makes new theoretical or empirical contributions. And most academic papers presume familiarity with the relevant academic literature, making it difficult for anyone outside of academia to make the best possible use of them.
The most useful research often came instead from regional Federal Reserve banks, non-partisan think tanks, the corporate sector, and academics who had the support, freedom, or job security to prioritize policy relevance. It generally fell into three categories:
- New measures of the economy
- Broad literature reviews
- Analyses that directly quantify or simulate policy decisions.
If you’re an economic researcher and you want to do work that is actually helpful for policymakers — and increases economists’ influence in government — aim for one of those three buckets.
The pandemic and its aftermath brought an urgent need for data at higher frequency, with greater geographic and sectoral detail, and about ways the economy suddenly changed. Some of the most useful research contributions during that period were new data and measures of the economy: they were valuable as ingredients rather than as recipes or finished meals. Here are some examples:
- An analysis of which jobs could be done remotely. This was published in April 2020, near the start of the pandemic, and inspired much of the early understanding of the prevalence and inequities of remote work.
- An estimate of how much the weather affects monthly employment changes. This is increasingly important for separating underlying economic trends from short-term swings from unseasonable or extreme weather.
- A measure of supply chain conditions. This helped quantify the challenges of getting goods into the US and to their customers during the pandemic.
- Job postings data from Indeed (where I worked as chief economist prior to my government service) showed hiring needs more quickly and in more geographic and occupational detail than official government statistics.
- Market-rent data from Zillow. This provided a useful leading indicator of the housing component of official inflation measures…(More)”.