Unpacking OpenAI’s Amazonian Archaeology Initiative


Article by Lori Regattieri: “What if I told you that one of the most well-capitalized AI companies on the planet is asking volunteers to help it uncover “lost cities” in Amazonia—by feeding machine learning models with open satellite data, lidar, “colonial” text and map records, and Indigenous oral histories? This is the premise of the OpenAI to Z Challenge, a Kaggle-hosted hackathon framed as a platform to “push the limits” of AI through global knowledge cooperation. In practice, this is a product development experiment cloaked as public participation. The contributions of users, the mapping of biocultural data, and the modeling of ancestral landscapes all feed into the refinement of OpenAI’s proprietary systems. The task itself may appear novel. The logic is not. This is the familiar playbook of Big Tech firms—capture public knowledge, reframe it as open input, and channel it into infrastructure that serves commercial, rather than communal, goals.

The “challenge” is marketed as a “digital archaeology” experiment: it invites participants from all around the world to search for “hidden” archaeological sites in the Amazonia biome (Brazil, Bolivia, Colombia, Ecuador, Guyana, Peru, Suriname, Venezuela, and French Guiana) using a curated stack of open-source data. The competition requires participants to use OpenAI’s latest GPT-4.1 and o3/o4-mini models to parse multispectral satellite imagery, LiDAR-derived elevation maps (Light Detection and Ranging is a remote sensing technology that uses laser pulses to generate high-resolution 3D models of terrain, including areas covered by dense vegetation), historical maps, and digitized ethnographic archives. Teams or individual participants must geolocate “potential” archaeological sites, argue their significance using verifiable public sources, and present reproducible methodologies. Prize incentives total $400,000 USD, with a first-place award of $250,000 split between cash and OpenAI API credits.

While framed as a novel invitation to “anyone” to do archaeological research, the competition focuses mainly on Brazilian territory, transforming Amazonia and its peoples into an open laboratory for model testing. What is presented as scientific crowdsourcing is in fact a carefully designed mechanism for refining geospatial AI at scale. Participants supply not just labor and insight, but novel training and evaluation strategies that extend far beyond heritage science and into the commercial logics of spatial computing…(More)”.

Will AI speed up literature reviews or derail them entirely?


Article by Sam A. Reynolds: “Over the past few decades, evidence synthesis has greatly increased the effectiveness of medicine and other fields. The process of systematically combining findings from multiple studies into comprehensive reviews helps researchers and policymakers to draw insights from the global literature. AI promises to speed up parts of the process, including searching and filtering. It could also help researchers to detect problematic papers. But in our view, other potential uses of AI mean that many of the approaches being developed won’t be sufficient to ensure that evidence syntheses remain reliable and responsive. In fact, we are concerned that the deployment of AI to generate fake papers presents an existential crisis for the field.

What’s needed is a radically different approach — one that can respond to the updating and retracting of papers over time.

We propose a network of continually updated evidence databases, hosted by diverse institutions as ‘living’ collections. AI could be used to help build the databases. And each database would hold findings relevant to a broad theme or subject, providing a resource for an unlimited number of ultra-rapid and robust individual reviews…

Currently, the gold standard for evidence synthesis is the systematic review. These are comprehensive, rigorous, transparent and objective, and aim to include as much relevant high-quality evidence as possible. They also use the best methods available for reducing bias. In part, this is achieved by getting multiple reviewers to screen the studies; declaring whatever criteria, databases, search terms and so on are used; and detailing any conflicts of interest or potential cognitive biases…(More)”.

Red Teaming Artificial Intelligence for Social Good


UNESCO Report: “Generative Artificial Intelligence (Gen AI) has become an integral part of our digital landscape and daily life. Understanding its risks and participating in solutions is crucial to ensuring that it works for the overall social good. This PLAYBOOK introduces Red Teaming as an accessible tool for testing and evaluating AI systems for social good, exposing stereotypes, bias and potential harms. As a way of illustrating harms, practical examples of Red Teaming for social good are provided, building on the collaborative work carried out by UNESCO and Humane Intelligence. The results demonstrate forms of technology-facilitated gender-based violence (TFGBV) enabled by Gen AI and provide practical actions and recommendations on how to address these growing concerns.

Red Teaming — the practice of intentionally testing Gen AI models to expose vulnerabilities — has traditionally been used by major tech companies and AI labs. One tech company surveyed 1,000 machine learning engineers and found that 89% reported vulnerabilities (Aporia, 2024). This PLAYBOOK provides access to these critical testing methods, enabling organizations and communities to actively participate. Through the structured exercises and real-world scenarios provided, participants can systematically evaluate how Gen AI models may perpetuate, either intentionally or unintentionally, stereotypes or enable gender-based violence. By giving organizations this easy-to-use tool to conduct their own Red Teaming exercises, the PLAYBOOK lets participants select their own thematic area of concern, enabling evidence-based advocacy for more equitable AI for social good…(More)”.

AI companies start winning the copyright fight


Article by Blake Montgomery: “…tech companies notched several victories in the fight over their use of copyrighted text to create artificial intelligence products.

Anthropic: A US judge has ruled that Anthropic, maker of the Claude chatbot, did not breach copyright law when it used books to train its artificial intelligence system without the authors’ permission. Judge William Alsup compared the Anthropic model’s use of books to that of a “reader aspiring to be a writer.”

And the next day, Meta: The US district judge Vince Chhabria, in San Francisco, said in his decision on the Meta case that the authors had not presented enough evidence that the technology company’s AI would cause “market dilution” by flooding the market with work similar to theirs.

The same day that Meta received its favorable ruling, a group of writers sued Microsoft, alleging copyright infringement in the creation of that company’s Megatron text generator. Judging by the rulings in favor of Meta and Anthropic, the authors are facing an uphill battle.

These three cases are skirmishes in the wider legal war over copyrighted media, which rages on. Three weeks ago, Disney and NBCUniversal sued Midjourney, alleging that the company’s namesake AI image generator and forthcoming video generator made illegal use of the studios’ iconic characters like Darth Vader and the Simpson family. The world’s biggest record labels – Sony, Universal and Warner – have sued two companies that make AI-powered music generators, Suno and Udio. On the textual front, the New York Times’ suit against OpenAI and Microsoft is ongoing.

The lawsuits over AI-generated text were filed first, and, as their rulings emerge, the next question in the copyright fight is whether decisions about one type of media will apply to the next.

“The specific media involved in the lawsuit – written works versus images versus videos versus audio – will certainly change the fair-use analysis in each case,” said John Strand, a trademark and copyright attorney with the law firm Wolf Greenfield. “The impact on the market for the copyrighted works is becoming a key factor in the fair-use analysis, and the market for books is different than that for movies.”…(More)”.

Money, Power and AI


Open Access Book edited by Zofia Bednarz and Monika Zalnieriute: “… bring together leading experts to shed light on how artificial intelligence (AI) and automated decision-making (ADM) create new sources of profits and power for financial firms and governments. Chapter authors—who include public and private lawyers, social scientists, and public officials working on various aspects of AI and automation across jurisdictions—identify mechanisms, motivations, and actors behind technology used by Automated Banks and Automated States, and argue for new rules, frameworks, and approaches to prevent harms that result from the increasingly common deployment of AI and ADM tools. Responding to the opacity of financial firms and governments enabled by AI, Money, Power and AI advances the debate on scrutiny of power and accountability of actors who use this technology…(More)”.

A New Social Contract for AI? Comparing CC Signals and the Social License for Data Reuse


Article by Stefaan Verhulst: “Last week, Creative Commons — the global nonprofit best known for its open copyright licenses — released “CC Signals: A New Social Contract for the Age of AI.” This framework seeks to offer creators a means to signal their preferences for how their works are used in machine learning, including training Artificial Intelligence systems. It marks an important step toward integrating re-use preferences and shared benefits directly into the AI development lifecycle….

From a responsible AI perspective, the CC Signals framework is an important development. It demonstrates how soft governance mechanisms — declarations, usage expressions, and social signaling — can supplement or even fill gaps left by inconsistent global copyright regimes in the context of AI. At the same time, this initiative provides an interesting point of comparison with our ongoing work to develop a Social License for Data Reuse. A social license for data reuse is a participatory governance framework that allows communities to collectively define, signal, and enforce the conditions under which data about them can be reused — including for training AI. Unlike traditional consent-based mechanisms, which focus on individual permissions at the point of collection, a social license introduces a community-centered, continuous process of engagement — ensuring that data practices align with shared values, ethical norms, and contextual realities. It provides a complementary layer to legal compliance, emphasizing trust, legitimacy, and accountability in data governance.

While both frameworks are designed to signal preferences and expectations for data or content reuse, they differ meaningfully in scope, method, and theory of change.

Below, we offer a comparative analysis of the two frameworks — highlighting how each approaches the challenge of embedding legitimacy and trust into AI and data ecosystems…(More)”.

AI and Assembly: Coming Together and Apart in a Datafied World


Book edited by Toussaint Nothias and Lucy Bernholz: “Artificial intelligence has moved from the lab into everyday life and is now seemingly everywhere. As AI creeps into every aspect of our lives, the data grab required to power AI also expands. People worldwide are tracked, analyzed, and influenced, whether on or off their screens, inside their homes or outside in public, still or in transit, alone or together. What does this mean for our ability to assemble with others for collective action, including protesting, holding community meetings, and organizing rallies? In this context, where and how does assembly take place, and who participates by choice and who by coercion? AI and Assembly explores these questions and offers global perspectives on the present and future of assembly in a world taken over by AI.

The contributors analyze how AI threatens free assembly by clustering people without consent, amplifying social biases, and empowering authoritarian surveillance. But they also explore new forms of associational life that emerge in response to these harms, from communities in the US conducting algorithmic audits to human rights activists in East Africa calling for biometric data protection and rideshare drivers in London advocating for fair pay. Ultimately, AI and Assembly is a rallying cry for those committed to a digital future beyond the narrow horizon of corporate extraction and state surveillance…(More)”.

Beyond AI and Copyright


White Paper by Paul Keller: “…argues for interventions to ensure the sustainability of the information ecosystem in the age of generative AI. Authored by Paul Keller, the paper builds on Open Future’s ongoing work on Public AI and on AI and creative labour, and proposes measures aimed at ensuring a healthy and equitable digital knowledge commons.

Rather than focusing on the rights of individual creators or the infringement debates that dominate current policy discourse, the paper frames generative AI as a new cultural and social technology—one that is rapidly reshaping how societies access, produce, and value information. It identifies two major structural risks: the growing concentration of control over knowledge, and the hollowing out of the institutions and economies that sustain human information production.

To counter these risks, the paper calls for the development of public AI infrastructures and a redistributive mechanism based on a levy on commercial AI systems trained on publicly available information. The proceeds would support not only creators and rightholders, but also public service media, cultural heritage institutions, open content platforms, and the development of Public AI systems…(More)”.

Why AI hardware needs to be open


Article by Ayah Bdeir: “Once again, the future of technology is being engineered in secret by a handful of people and delivered to the rest of us as a sealed, seamless, perfect device. When technology is designed in secrecy and sold to us as a black box, we are reduced to consumers. We wait for updates. We adapt to features. We don’t shape the tools; they shape us. 

This is a problem. And not just for tinkerers and technologists, but for all of us.

We are living through a crisis of disempowerment. Children are more anxious than ever; the former US surgeon general described a loneliness epidemic; people are increasingly worried about AI eroding education. The beautiful devices we use have been correlated with many of these trends. Now AI—arguably the most powerful technology of our era—is moving off the screen and into physical space. 

The timing is not a coincidence. Hardware is having a renaissance. Every major tech company is investing in physical interfaces for AI. Startups are raising capital to build robots, glasses, and wearables that will track our every move. The form factor of AI is the next battlefield. Do we really want our future mediated entirely through interfaces we can’t open, code we can’t see, and decisions we can’t influence? 

This moment creates an existential opening, a chance to do things differently. Because away from the self-centeredness of Silicon Valley, a quiet, grounded sense of resistance is reactivating. I’m calling it the revenge of the makers. 

In 2007, as the iPhone emerged, the maker movement was taking shape. This subculture advocates for learning-through-making in social environments like hackerspaces and libraries. DIY and open hardware enthusiasts gathered in person at Maker Faires—large events where people of all ages tinkered and shared their inventions in 3D printing, robotics, electronics, and more. Motivated by fun, self-fulfillment, and shared learning, the movement birthed companies like MakerBot, Raspberry Pi, Arduino, and (my own education startup) littleBits from garages and kitchen tables. I myself wanted to challenge the notion that technology had to be intimidating or inaccessible, creating modular electronic building blocks designed to put the power of invention in the hands of everyone…(More)”

5 Ways Cooperatives Can Shape the Future of AI


Article by Trebor Scholz and Stefano Tortorici: “Today, AI development is controlled by a small cadre of firms. Companies like OpenAI, Alphabet, Amazon, Meta, and Microsoft dominate through vast computational resources, massive proprietary datasets, deep pools of technical talent, extractive data practices, low-cost labor, and capital that enables continuous experimentation and rapid deployment. Even open-source challengers like DeepSeek run on vast computational muscle and industrial training pipelines.

This domination brings problems: privacy violation and cost-minimizing labor strategies, high environmental costs from data centers, and evident biases in models that can reinforce discrimination in hiring, healthcare, credit scoring, policing, and beyond. These problems tend to affect the people who are already too often left out. AI’s opaque algorithms don’t just sidestep democratic control and transparency—they shape who gets heard, who’s watched, and who’s quietly pushed aside.

Yet, as companies consider using this technology, it can seem that there are few other options, and that they are locked into these compromises.

A different model is taking shape, however, with little fanfare, but with real potential. AI cooperatives—organizations developing or governing AI technologies based on cooperative principles—offer a promising alternative. The cooperative movement, with its global footprint and diversity of models, has been successful in sectors ranging from banking and agriculture to insurance and manufacturing. Cooperative enterprises, which are owned and governed by their members, have long managed infrastructure for the public good.

A handful of AI cooperatives offer early examples of how democratic governance and shared ownership could shape more accountable and community-centered uses of the technology. Most are large agricultural cooperatives that are putting AI to use in their day-to-day operations, such as IFFCO’s DRONAI program (AI for fertilization), FrieslandCampina (dairy quality control), and Fonterra (milk production analytics). Cooperatives must urgently organize to challenge AI’s dominance or remain on the sidelines of critical political and technological developments.

There is undeniably potential here, for both existing cooperatives and companies that might want to partner with them. The $589 billion drop in Nvidia’s market cap that DeepSeek triggered shows how quickly open-source innovation can shift the landscape. But for cooperative AI labs to do more than signal intent, they need public infrastructure, civic partnerships, and serious backing…(More)”.