How social media and online communities influence climate change beliefs


Article by James Rice: “Psychological, social, and political forces all shape beliefs about climate change. Climate scientists bear a responsibility — not only as researchers and educators, but as public communicators — to guard against climate misinformation. This responsibility should be foundational, supported by economists, sociologists, and industry leaders.

While fake news manifests in various forms, not all misinformation is created with the intent to deceive. Regardless of intent, climate misinformation threatens policy integrity. Strengthening environmental communication is thus crucial to counteracting ideological divides that distort scientific discourse and weaken public trust.

Political polarisation, misinformation, and the erosion of scientific authority pose challenges demanding rigorous scholarship and proactive public engagement. Climate scientists, policymakers, and climate justice advocates must uphold scientific integrity while recognising that climate science operates in a politically charged landscape. Agnosticism and resignation in the face of climate misinformation are as dangerous as outright denial of climate science. Combating misinformation extends beyond scientific accuracy: it requires strategic communication, engagement with advocacy groups, and the reinforcement of public trust in environmental expertise…(More)”.

Political Responsibility and Tech Governance


Book by Jude Browne: “Not a day goes by without a new story on the perils of technology: from increasingly clever machines that surpass human capability and comprehension to genetic technologies capable of altering the human genome in ways we cannot predict. How can we respond? What should we do politically? Focusing on the rise of robotics and artificial intelligence (AI), and the impact of new reproductive and genetic technologies (Repro-tech), Jude Browne questions who has political responsibility for the structural impacts of these technologies and how we might go about preparing for the far-reaching societal changes they may bring. This thought-provoking book tackles some of the most pressing issues of our time and offers a compelling vision for how we can respond to these challenges in a way that is both politically feasible and socially responsible…(More)”.

Cloze Encounters: The Impact of Pirated Data Access on LLM Performance


Paper by Stella Jia & Abhishek Nagaraj: “Large Language Models (LLMs) have demonstrated remarkable capabilities in text generation, but their performance may be influenced by the datasets on which they are trained, including potentially unauthorized or pirated content. We investigate the extent to which data access through pirated books influences LLM responses. We test the performance of leading foundation models (GPT, Claude, Llama, and Gemini) on a set of books that were and were not included in the Books3 dataset, which contains full-text pirated books and could be used for LLM training. We assess book-level performance using the “name cloze” word-prediction task. To examine the causal effect of Books3 inclusion we employ an instrumental variables strategy that exploits the pattern of book publication years in the Books3 dataset. In our sample of 12,916 books, we find significant improvements in LLM name cloze accuracy on books available within the Books3 dataset compared to those not present in these data. These effects are more pronounced for less popular books as compared to more popular books and vary across leading models. These findings have crucial implications for the economics of digitization, copyright policy, and the design and training of AI systems…(More)”.
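The paper's "name cloze" task can be pictured with a minimal sketch: mask a character's name in a passage, ask a model to fill the blank, and score exact-match accuracy. The `toy_predict` function below is a hypothetical stand-in for a real call to GPT, Claude, Llama, or Gemini, and the passages are illustrative, not drawn from the paper's sample.

```python
def make_cloze(passage: str, name: str, mask: str = "[MASK]") -> str:
    """Replace the target name with a mask token to form the cloze prompt."""
    assert name in passage, "target name must appear in the passage"
    return passage.replace(name, mask)

def name_cloze_accuracy(examples, predict) -> float:
    """examples: list of (passage, name); predict: prompt -> guessed name."""
    correct = sum(
        predict(make_cloze(passage, name)) == name
        for passage, name in examples
    )
    return correct / len(examples)

# Toy predictor standing in for a foundation-model API call.
def toy_predict(prompt: str) -> str:
    return "Elizabeth" if "Darcy" in prompt else "Unknown"

examples = [
    ("Mr. Darcy looked at Elizabeth and smiled.", "Elizabeth"),
    ("The captain greeted Ishmael on the deck.", "Ishmael"),
]
print(name_cloze_accuracy(examples, toy_predict))  # 0.5
```

The paper's causal claim then comes from comparing this accuracy between books inside and outside Books3, instrumenting inclusion with publication year.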

Getting the Public on Side: How to Make Reforms Acceptable by Design


OECD Report: “Public acceptability is a crucial condition for the successful implementation of reforms. The challenges raised by the green, digital and demographic transitions call for urgent and ambitious policy action. Despite this, governments often struggle to build sufficiently broad public support for the reforms needed to promote change. Better information and effective public communication have a key role to play. But policymakers cannot get the public to choose the side of reform without a proper understanding of people’s views and of how those views can help strengthen the policy process.

Perceptual and behavioural data provide an important source of insights on the perceptions, attitudes and preferences that constitute the “demand-side” of reform. The interdisciplinary OECD Expert Group on New Measures of the Public Acceptability of Reforms was set up in 2021 to take stock of these insights and explore their potential for improving policy. This report reflects the outcomes of the Expert Group’s work. It looks at and assesses (i) the available data and what they can tell policymakers about people’s views; (ii) the analytical frameworks through which these data are interpreted; and (iii) the policy tools through which considerations of public acceptability are integrated into the reform process…(More)”.

Should AGI-preppers embrace DOGE?


Blog by Henry Farrell: “…AGI-prepping is reshaping our politics. Wildly ambitious claims for AGI have not only shaped America’s grand strategy, but are plausibly among the justifying reasons for DOGE.

After the announcement of DOGE, but before it properly got going, I talked to someone who was not formally affiliated, but was very definitely DOGE adjacent. I put it to this individual that tearing out the decision making capacities of government would not be good for America’s ability to do things in the world. Their response (paraphrased slightly) was: so what? We’ll have AGI by late 2026. And indeed, one of DOGE’s major ambitions, as described in a new article in WIRED, appears to have been to pull as much government information as possible into a large model that could then provide useful information across the totality of government.

The point – which I don’t think is understood nearly widely enough – is that radical institutional revolutions such as DOGE follow naturally from the AGI-prepper framework. If AGI is right around the corner, we don’t need a massive federal government apparatus organizing funding for science via the National Science Foundation and the National Institutes of Health. After all, in Amodei and Pottinger’s prediction:

By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. … It will be able to … complete complex tasks that would take people months or years, such as designing new weapons or curing diseases.

Who needs expensive and cumbersome bureaucratic institutions for organizing funding scientists in a near future where a “country of geniuses [will be] contained in a data center,” ready to solve whatever problems we ask them to? Indeed, if these bottled geniuses are cognitively superior to humans across most or all tasks, why do we need human expertise at all, beyond describing and explaining human wants? From this perspective, most human-based institutions are obsolescing assets that need to be ripped out, and DOGE is only the barest of beginnings…(More)”.

Bubble Trouble


Article by Bryan McMahon: “…Venture capital (VC) funds, drunk on a decade of “growth at all costs,” have poured about $200 billion into generative AI. Making matters worse, the stock market’s bull run is deeply dependent on the growth of the Big Tech companies fueling the AI bubble. In 2023, 71 percent of the total gains in the S&P 500 were attributable to the “Magnificent Seven”—Apple, Nvidia, Tesla, Alphabet, Meta, Amazon, and Microsoft—all of which are among the biggest spenders on AI. Just four—Microsoft, Alphabet, Amazon, and Meta—combined for $246 billion of capital expenditure in 2024 to support the AI build-out. Goldman Sachs expects Big Tech to spend over $1 trillion on chips and data centers to power AI over the next five years. Yet OpenAI, the current market leader, expects to lose $5 billion this year and to see its annual losses swell to $11 billion by 2026. If the AI bubble bursts, it threatens not only to wipe out VC firms in the Valley but also to blow a gaping hole in the public markets and cause an economy-wide meltdown…(More)”.

Integrating Social Media into Biodiversity Databases: The Next Big Step?


Article by Muhammad Osama: “Digital technologies and social media have transformed ecology and conservation biology data collection. Traditional biodiversity monitoring often relies on field surveys, which can be time-consuming and biased toward rural habitats.

The Global Biodiversity Information Facility (GBIF) serves as a key repository for biodiversity data, but it faces challenges such as delayed data availability and underrepresentation of urban habitats.

Social media platforms have become valuable tools for rapid data collection, enabling users to share georeferenced observations instantly, reducing time lags associated with traditional methods. The widespread use of smartphones with cameras allows individuals to document wildlife sightings in real-time, enhancing biodiversity monitoring. Integrating social media data with traditional ecological datasets offers significant advancements, particularly in tracking species distributions in urban areas.

In this paper, the authors evaluated the Jersey tiger moth’s (JTM) habitat usage by comparing occurrence data from social media platforms (Instagram and Flickr) with traditional records from GBIF and iNaturalist. They hypothesized that social media data would reveal significant JTM occurrences in urban environments, which may be underrepresented in traditional datasets…(More)”.
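The comparison the authors describe can be sketched as a simple aggregation: group occurrence records by data source and compute what fraction fall in urban habitat. The records below are made-up placeholders, not the paper's data; only the source names match the study.

```python
from collections import Counter

# Hypothetical occurrence records: (source, habitat) pairs standing in for
# georeferenced observations from social media vs. traditional databases.
records = [
    ("Instagram", "urban"), ("Instagram", "urban"), ("Flickr", "urban"),
    ("Flickr", "rural"), ("GBIF", "rural"), ("GBIF", "rural"),
    ("iNaturalist", "urban"), ("iNaturalist", "rural"),
]

SOCIAL = {"Instagram", "Flickr"}
TRADITIONAL = {"GBIF", "iNaturalist"}

def urban_share(records, sources):
    """Fraction of a source group's occurrences found in urban habitat."""
    habitats = [habitat for source, habitat in records if source in sources]
    return Counter(habitats)["urban"] / len(habitats)

print(urban_share(records, SOCIAL))       # 0.75
print(urban_share(records, TRADITIONAL))  # 0.25
```

A gap between the two shares, as in this toy example, is the kind of signal the authors use to argue that social media fills in urban habitats underrepresented in traditional surveys.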

The Language Data Space (LDS)


European Commission: “… welcomes the launch of the Alliance for Language Technologies European Digital Infrastructure Consortium (ALT-EDIC) and the Language Data Space (LDS).

Aimed at addressing the shortage of European language data needed for training large language models, these projects are set to revolutionise multilingual Artificial Intelligence (AI) systems across the EU.

By offering services in all EU languages, the initiatives are designed to break down language barriers, providing better, more accessible solutions for smaller businesses within the EU. This effort not only aims to preserve the EU’s rich cultural and linguistic heritage in the digital age but also strengthens Europe’s quest for tech sovereignty. Formed in February 2024, the ALT-EDIC includes 17 participating Member States and 9 observer Member States and regions, making it one of the pioneering European Digital Infrastructure Consortia.

The LDS, part of the Common European Data Spaces, is crucial for increasing data availability for AI development in Europe. Developed by the Commission and funded by the DIGITAL programme, this project aims to create a cohesive marketplace for language data. This will enhance the collection and sharing of multilingual data to support European large language models. Initially accessible to selected institutions and companies, the project aims to eventually involve all European public and private stakeholders.

Find more information about the Alliance for Language Technologies European Digital Infrastructure Consortium (ALT-EDIC) and the Language Data Space (LDS)…(More)”

New AI Collaboratives to take action on wildfires and food insecurity


Google: “…last September we introduced AI Collaboratives, a new funding approach designed to unite public, private and nonprofit organizations, and researchers, to create AI-powered solutions to help people around the world.

Today, we’re sharing more about our first two focus areas for AI Collaboratives: Wildfires and Food Security.

Wildfires are a global crisis, claiming more than 300,000 lives due to smoke exposure annually and causing billions of dollars in economic damage. …Google.org has convened more than 15 organizations, including the Earth Fire Alliance and the Moore Foundation, to help in this important effort. By coordinating funding and integrating cutting-edge science, emerging technology and on-the-ground applications, we can provide collaborators with the tools they need to identify and track wildfires in near real time; quantify wildfire risk; shift more acreage to beneficial fires; and ultimately reduce the damage caused by catastrophic wildfires.

Nearly one-third of the world’s population faces moderate or severe food insecurity due to extreme weather, conflict and economic shocks. The AI Collaborative: Food Security will strengthen the resilience of global food systems and improve food security for the world’s most vulnerable populations through AI technologies, collaborative research, data-sharing and coordinated action. To date, 10 organizations have joined us in this effort, and we’ll share more updates soon…(More)”.

Expanding the Horizons of Collective Artificial Intelligence (CAI): From Individual Nudges to Relational Cognition


Blog by Evelien Verschroeven: “As AI continues to evolve, it is essential to move beyond focusing solely on individual behavior changes. The individual input — whether through behavior, data, or critical content — remains important. New data and fresh perspectives are necessary for AI to continue learning, growing, and improving its relevance. However, as we head into what some are calling the golden years of AI, it’s critical to acknowledge a potential challenge: within five years, it is predicted that 50% of AI-generated content will be based on AI-created material, creating a risk of “inbreeding” in which AI learns from itself rather than from the diversity of human experience and knowledge.
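The "inbreeding" dynamic the post warns about can be illustrated with a toy simulation, not a model of any real training pipeline: if each generation of a corpus is resampled only from the previous generation's outputs, with no fresh human input, the number of distinct items can never grow and in practice steadily shrinks.

```python
import random

random.seed(0)

# Stand-in for diverse human-created content: 1000 possible items.
vocab = list(range(1000))
corpus = [random.choice(vocab) for _ in range(1000)]

# Each generation is resampled (with replacement) from the previous one,
# mimicking training on self-generated data alone.
diversity = [len(set(corpus))]  # distinct items per generation
for _ in range(5):
    corpus = [random.choice(corpus) for _ in range(1000)]
    diversity.append(len(set(corpus)))

print(diversity)  # a monotonically shrinking sequence
```

Because every resampled item already existed in the prior corpus, diversity is non-increasing by construction; the simulation just makes the rate of collapse visible, which is the cycle the platforms mentioned below aim to break by injecting new real-world data.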

Platforms like Google’s AI for Social Good and Unanimous AI’s Swarm play pivotal roles in breaking this cycle. By encouraging the aggregation of real-world data, they add new content that can influence and shape AI’s evolution. While they focus on individual data contributions, they also help keep AI systems grounded in real-world scenarios, ensuring that the content remains critical and diverse.

However, human oversight is key. AI systems, even with the best intentions, are still learning from patterns that humans provide. It’s essential that AI continues to receive diverse human input, so that its understanding remains grounded in real-world perspectives. AI should be continuously checked and guided by human creativity, critical thinking, and social contexts, to ensure that it doesn’t become isolated or too self-referential.

As we continue advancing AI, it is crucial to embrace relational cognition and collective intelligence. This approach will allow AI to address both individual and collective needs, enhancing not only personal development but also strengthening social bonds and fostering more resilient, adaptive communities…(More)”.