Integrating Social Media into Biodiversity Databases: The Next Big Step?


Article by Muhammad Osama: “Digital technologies and social media have transformed data collection in ecology and conservation biology. Traditional biodiversity monitoring often relies on field surveys, which can be time-consuming and biased toward rural habitats.

The Global Biodiversity Information Facility (GBIF) serves as a key repository for biodiversity data, but it faces challenges such as delayed data availability and underrepresentation of urban habitats.

Social media platforms have become valuable tools for rapid data collection, enabling users to share georeferenced observations instantly, reducing time lags associated with traditional methods. The widespread use of smartphones with cameras allows individuals to document wildlife sightings in real-time, enhancing biodiversity monitoring. Integrating social media data with traditional ecological datasets offers significant advancements, particularly in tracking species distributions in urban areas.

In this paper, the authors evaluated the Jersey tiger moth’s (JTM) habitat usage by comparing occurrence data from social media platforms (Instagram and Flickr) with traditional records from GBIF and iNaturalist. They hypothesized that social media data would reveal significant JTM occurrences in urban environments, which may be underrepresented in traditional datasets…(More)”.
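As a concrete illustration of the kind of occurrence data the study draws on, the sketch below pulls georeferenced Jersey tiger moth records from GBIF's public API. It is a minimal sketch, assuming the pygbif Python client; the urban/rural classification step is left as a placeholder and is not the authors' method.

```python
# Minimal sketch (not the authors' code): pull georeferenced Jersey tiger moth
# (Euplagia quadripunctaria) occurrence records from GBIF's public API.
# Assumes the pygbif client; the urban/rural step is a placeholder.
from pygbif import occurrences

def fetch_occurrences(species="Euplagia quadripunctaria", limit=300):
    """Return georeferenced GBIF occurrence records for the given species."""
    resp = occurrences.search(scientificName=species, hasCoordinate=True, limit=limit)
    return resp.get("results", [])

def is_urban(lat, lon):
    # Placeholder: a real analysis would query a land-cover layer
    # (e.g. CORINE or GHSL built-up surfaces) at these coordinates.
    raise NotImplementedError("swap in a land-cover lookup here")

records = fetch_occurrences()
coords = [(r["decimalLatitude"], r["decimalLongitude"])
          for r in records
          if "decimalLatitude" in r and "decimalLongitude" in r]
print(f"{len(coords)} georeferenced GBIF records retrieved")
```

Records from Instagram or Flickr would need a separate harvesting step, after which both sources can be compared on the same urban/rural classification.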

New AI Collaboratives to take action on wildfires and food insecurity


Google: “…last September we introduced AI Collaboratives, a new funding approach designed to unite public, private and nonprofit organizations, and researchers, to create AI-powered solutions to help people around the world.

Today, we’re sharing more about our first two focus areas for AI Collaboratives: Wildfires and Food Security.

Wildfires are a global crisis, claiming more than 300,000 lives due to smoke exposure annually and causing billions of dollars in economic damage. …Google.org has convened more than 15 organizations, including Earth Fire Alliance and Moore Foundation, to help in this important effort. By coordinating funding and integrating cutting-edge science, emerging technology and on-the-ground applications, we can provide collaborators with the tools they need to identify and track wildfires in near real time; quantify wildfire risk; shift more acreage to beneficial fires; and ultimately reduce the damage caused by catastrophic wildfires.

Nearly one-third of the world’s population faces moderate or severe food insecurity due to extreme weather, conflict and economic shocks. The AI Collaborative: Food Security will strengthen the resilience of global food systems and improve food security for the world’s most vulnerable populations through AI technologies, collaborative research, data-sharing and coordinated action. To date, 10 organizations have joined us in this effort, and we’ll share more updates soon…(More)”.

Expanding the Horizons of Collective Artificial Intelligence (CAI): From Individual Nudges to Relational Cognition


Blog by Evelien Verschroeven: “As AI continues to evolve, it is essential to move beyond focusing solely on individual behavior changes. The individual input — whether through behavior, data, or critical content — remains important. New data and fresh perspectives are necessary for AI to continue learning, growing, and improving its relevance. However, as we head into what some are calling the golden years of AI, it’s critical to acknowledge a potential challenge: within five years, it is predicted that 50% of AI-generated content will be based on AI-created material, creating a risk of inbreeding where AI learns from itself, rather than from the diversity of human experience and knowledge.
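The "inbreeding" risk described here is often discussed in the research literature as model collapse. The toy simulation below is an illustration of ours, not from the blog: when each generation of a model is fitted only to the previous generation's output, the diversity of that output tends to erode.

```python
# Toy illustration of the "learning from itself" risk: repeatedly re-fitting a
# simple model (here, a Gaussian) to its own samples erodes diversity.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                              # generation 0: "human" data
for generation in range(1, 51):
    synthetic = rng.normal(mu, sigma, size=10)    # small batch of model output
    mu, sigma = synthetic.mean(), synthetic.std() # next model fit only to that batch
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
# Over successive generations the fitted std tends to shrink toward zero:
# variance (diversity) is lost when no fresh human data enters the loop.
```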

Platforms like Google’s AI for Social Good and Unanimous AI’s Swarm play pivotal roles in breaking this cycle. By encouraging the aggregation of real-world data, they add new content that can influence and shape AI’s evolution. While they focus on individual data contributions, they also help keep AI systems grounded in real-world scenarios, ensuring that the content remains critical and diverse.

However, human oversight is key. AI systems, even with the best intentions, are still learning from patterns that humans provide. It’s essential that AI continues to receive diverse human input, so that its understanding remains grounded in real-world perspectives. AI should be continuously checked and guided by human creativity, critical thinking, and social contexts, to ensure that it doesn’t become isolated or too self-referential.

As we continue advancing AI, it is crucial to embrace relational cognition and collective intelligence. This approach will allow AI to address both individual and collective needs, enhancing not only personal development but also strengthening social bonds and fostering more resilient, adaptive communities…(More)”.

An immersive technologies policy primer


OECD Policy Primer: “Immersive technologies, such as augmented reality, digital twins and virtual worlds, offer innovative ways to interact with information and the environment by engaging one’s senses. This paper explores potential benefits of these technologies, from innovative commercial applications to addressing societal challenges. It also highlights potential risks, such as extensive data collection, mental or physical risks from misuse, and emerging cyber threats. It outlines policy opportunities and challenges in maximising these benefits while mitigating risks, with real-world use cases in areas like remote healthcare and education for people with disabilities. The paper emphasises the critical role of anticipatory governance and international collaboration in shaping the human-centric and values-based development and use of immersive technologies…(More)”.

Large AI models are cultural and social technologies


Essay by Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans: “Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about the cultural and social consequences of large models, orbiting around two foci: the immediate effects of these systems as they are currently used, and hypothetical futures in which these systems turn into AGI agents, perhaps even superintelligent AGI agents.

But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from the social and behavioral sciences with computer science can help us understand AI systems more accurately. Large models should not be viewed primarily as intelligent agents, but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.

The new technology of large models combines important features of earlier technologies. Like pictures, writing, print, video, Internet search, and other such technologies, large models allow people to access information that other people have created. Large models – currently language, vision, and multi-modal models – depend on the fact that the Internet has made the products of these earlier technologies readily available in machine-readable form. But like economic markets, state bureaucracies, and other social technologies, these systems not only make information widely available, they allow it to be reorganized, transformed, and restructured in distinctive ways. Adopting Herbert Simon’s terminology, large models are a new variant of the “artificial systems of human society” that process information to enable large-scale coordination…(More)”

Can small language models revitalize Indigenous languages?


Article by Brooke Tanner and Cameron F. Kerry: “Indigenous languages play a critical role in preserving cultural identity and transmitting unique worldviews, traditions, and knowledge, but at least 40% of the world’s 6,700 languages are currently endangered. The United Nations declared 2022-2032 as the International Decade of Indigenous Languages to draw attention to this threat, in hopes of supporting the revitalization of these languages and preservation of access to linguistic resources.  

Building on the advantages of small language models (SLMs), several initiatives have successfully adapted these models specifically for Indigenous languages. Such Indigenous language models (ILMs) represent a subset of SLMs that are designed, trained, and fine-tuned with input from the communities they serve.

Case studies and applications 

  • Meta released No Language Left Behind (NLLB-200), a 54 billion–parameter open-source machine translation model that supports 200 languages as part of Meta’s universal speech translator project. The model includes support for languages with limited translation resources. While the breadth of languages the model covers is novel, NLLB-200 can struggle to capture the intricacies of local context for low-resource languages and often relies on machine-translated sentence pairs from across the internet due to the scarcity of digitized monolingual data.
  • Lelapa AI’s InkubaLM-0.4B is an SLM with applications for low-resource African languages. Trained on 1.9 billion tokens across languages including isiZulu, Yoruba, Swahili, and isiXhosa, InkubaLM-0.4B (with 400 million parameters) builds on Meta’s LLaMA 2 architecture, providing a smaller model than the original LLaMA 2 pretrained model with 7 billion parameters. 
  • IBM Research Brazil and the University of São Paulo have collaborated on projects aimed at preserving Brazilian Indigenous languages such as Guarani Mbya and Nheengatu. These initiatives emphasize co-creation with Indigenous communities and address concerns about cultural exposure and language ownership. Initial efforts included electronic dictionaries, word prediction, and basic translation tools. Notably, when a prototype writing assistant for Guarani Mbya raised concerns about exposing their language and culture online, project leaders paused further development pending community consensus.  
  • Researchers have fine-tuned pre-trained models for Nheengatu using linguistic educational sources and translations of the Bible, with plans to incorporate community-guided spellcheck tools. Because the Bible data were primarily translated by colonial priests, the resulting translations often sounded archaic and could reflect cultural abuse and violence; they were therefore classified as potentially “toxic” data that would not be used in any deployed system without explicit Indigenous community agreement…(More)”.
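The fine-tuning workflow described in these case studies can be sketched with standard open-source tooling. The example below is a generic sketch rather than any of the projects' actual pipelines: the base-model identifier and corpus path are placeholders, and it assumes the Hugging Face transformers and datasets libraries, with the corpus restricted to text the community has explicitly approved.

```python
# Generic sketch (not the projects' actual pipeline): fine-tune a small
# pre-trained causal language model on a community-approved text corpus.
# BASE_MODEL and CORPUS are placeholders; assumes Hugging Face transformers/datasets.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "lelapa/InkubaLM-0.4B"        # placeholder small base-model id
CORPUS = "community_approved_corpus.txt"   # placeholder: text cleared for use

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:            # causal LMs often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

dataset = load_dataset("text", data_files={"train": CORPUS})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ilm-finetuned",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice, the consent and data-governance steps the excerpt emphasizes come before any of this code runs.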

Beyond Answers Presented by AI: Unlocking Innovation and Problem Solving Through A New Science of Questions


Paper by Stefaan Verhulst and Hannah Chafetz: “Today’s global crises–from climate change to inequality–have demonstrated the need for a broader conceptual transformation in how to approach societal issues. Focusing on the questions can transform our understanding of today’s problems and unlock new discoveries and innovations that make a meaningful difference. Yet, how decision-makers go about asking questions remains an underexplored topic. 

Much of our recent work has focused on advancing a new science of questions that uses participatory approaches to define and prioritize the questions that matter most. As part of this work, we convened an Interdisciplinary Committee on Establishing and Democratizing the Science of Questions to discuss why questions matter for society and the actions needed to build a movement around this new science. 

In this article, we provide the main findings from these gatherings. First, we outline several roles that questions can play in shaping policy, research, and innovation. Supported by real-world examples, we discuss how questions are a critical device for setting agendas, increasing public participation, improving coordination, and more. We then present five key challenges, raised by the Committee, in developing a systematic approach to questions, along with potential solutions to address those challenges. Existing challenges include weak recognition of questions, a lack of skills, and a lack of consensus on what makes a good question.

In the latter part of this piece, we propose the concept of The QLab–a global center dedicated to the research and practice of asking questions. Co-developed with the Committee, the QLab would include five core functions: Thought Leadership, Architecting the Discovery of Questions, Field Building, Institutionalization and Practice, and Research on Questioning. By focusing on these core functions, The QLab can make significant progress towards establishing a field dedicated to the art and science of asking questions…(More)”.

A Quest for AI Knowledge


Paper by Joshua S. Gans: “This paper examines how the introduction of artificial intelligence (AI), particularly generative and large language models capable of interpolating precisely between known data points, reshapes scientists’ incentives for pursuing novel versus incremental research. Extending the theoretical framework of Carnehl and Schneider (2025), we analyse how decision-makers leverage AI to improve precision within well-defined knowledge domains. We identify conditions under which the availability of AI tools encourages scientists to choose more socially valuable, highly novel research projects, contrasting sharply with traditional patterns of incremental knowledge growth. Our model demonstrates a critical complementarity: scientists strategically align their research novelty choices to maximise the domain where AI can reliably inform decision-making. This dynamic fundamentally transforms the evolution of scientific knowledge, leading either to systematic “stepping stone” expansions or endogenous research cycles of strategic knowledge deepening. We discuss the broader implications for science policy, highlighting how sufficiently capable AI tools could mitigate traditional inefficiencies in scientific innovation, aligning private research incentives closely with the social optimum…(More)”.
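To make the interpolation intuition concrete, here is a stylized toy of our own (an illustrative assumption, not the paper's formal model): if knowledge is a set of studied points on a line and AI can reliably answer only questions lying between points already studied, then choosing a more novel project enlarges the domain the AI can cover.

```python
# Stylized toy (illustrative assumption, not the paper's model): AI interpolates
# between studied points, so more novel projects enlarge the AI-coverable domain.
known = [0.0, 0.3, 0.5]            # questions already answered by prior research

def ai_can_answer(question, known_points):
    """AI interpolates between existing results but does not extrapolate."""
    return min(known_points) <= question <= max(known_points)

def covered_span(known_points):
    return max(known_points) - min(known_points)

for novelty in (0.55, 0.8, 1.2):   # candidate projects, increasingly far out
    expanded = known + [novelty]
    print(f"project at {novelty:.2f}: AI-coverable span "
          f"{covered_span(known):.2f} -> {covered_span(expanded):.2f}, "
          f"can AI answer 0.7 afterwards? {ai_can_answer(0.7, expanded)}")
```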

Climate Assemblies and the Law: A Research Roadmap


Article by Leslie-Anne Duvic-Paoli: “The article is interested in the relationship between citizens’ assemblies on climate change (‘climate assemblies’) and the law. It offers a research roadmap on the legal dimensions of climate assemblies with a view to advancing our knowledge of deliberative climate governance. The article explores six fundamental areas of inquiry on which legal scholarship can offer relevant insights. They relate to: i) understanding the outcomes of climate assemblies; ii) clarifying their role in the public law relationship between individuals and government; iii) gaining insights into the making of climate legislation and other rules; iv) exploring the societal authority of norms; v) illustrating the transnational governance of climate change, including the diffusion of its norms; and vi) offering a testing ground for the design of legal systems that are more ecologically and socially just. The aim is to nudge legal scholars into exploring the richness of the questions raised by the emergence of climate assemblies and, in turn, to encourage other social science scholars to reflect on how the legal perspective might contribute to better understanding their object of study…(More)”.

Nudges and Nudging: A User’s Manual


Paper by Cass Sunstein: “Many policies take the form of nudges, defined as liberty-preserving approaches that steer people in particular directions, but that also allow them to go their own way. Some nudges attempt to correct self-control problems. Some nudges attempt to counteract unrealistic optimism. Some nudges attempt to correct present bias. Some nudges attempt to correct market failures, as when people are nudged not to emit air pollution. For every conventional market failure, there is a potential nudge. For every behavioral bias (optimistic bias, present bias, availability bias, limited attention), there is a responsive nudge. There are many misconceptions about nudges and nudging, and they are a diversion…(More)”.