Surveillance pricing: How your data determines what you pay


Article by Douglas Crawford: “Surveillance pricing, also known as personalized or algorithmic pricing, is a practice where companies use your personal data, such as your location, the device you’re using, your browsing history, and even your income, to determine what price to show you. It’s not just about supply and demand — it’s about you as a consumer and how much the system thinks you’re able (or willing) to pay.

Have you ever shopped online for a flight, only to find that the price mysteriously increased the second time you checked? Or have you and a friend searched for the same hotel room on your phones, only to find your friend sees a lower price? This isn’t a glitch — it’s surveillance pricing at work.

In the United States, surveillance pricing is becoming increasingly prevalent across various industries, including airlines, hotels, and e-commerce platforms. The practice exists elsewhere too, but in other parts of the world, such as the European Union, there is growing recognition of the danger this pricing model poses to citizens’ privacy, resulting in stricter data protection laws aimed at curbing it. The US appears to be moving in the opposite direction…(More)”.
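To see why the same room can carry two prices, it helps to remember that a personalized-pricing engine is, at bottom, a function from tracked signals to a price. The sketch below is purely hypothetical: the signals, weights, and thresholds are invented for exposition and do not reflect any real retailer's logic.

```python
from dataclasses import dataclass

@dataclass
class ShopperSignals:
    # All fields are illustrative stand-ins for data a tracker might collect.
    device: str              # e.g., "iphone", "android", "desktop"
    repeat_visits: int       # how many times this user has re-checked the item
    zip_income_index: float  # 1.0 = median income for the shopper's area

BASE_PRICE = 100.00

def personalized_price(s: ShopperSignals) -> float:
    """Hypothetical surveillance-pricing heuristic: infer willingness to pay."""
    multiplier = 1.0
    if s.device == "iphone":                       # premium device as income proxy
        multiplier += 0.05
    multiplier += 0.03 * min(s.repeat_visits, 3)   # re-checking reads as urgency
    multiplier += 0.10 * max(s.zip_income_index - 1.0, 0.0)
    return round(BASE_PRICE * multiplier, 2)

# The same flight, two shoppers: the second one "sees" a higher price.
print(personalized_price(ShopperSignals("desktop", 0, 1.0)))  # 100.0
print(personalized_price(ShopperSignals("iphone", 2, 1.3)))   # 114.0
```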

Collective Bargaining in the Information Economy Can Address AI-Driven Power Concentration


Position paper by Nicholas Vincent, Matthew Prewitt and Hanlin Li: “…argues that there is an urgent need to restructure markets for the information that goes into AI systems. Specifically, producers of information goods (such as journalists, researchers, and creative professionals) need to be able to collectively bargain with AI product builders in order to receive reasonable terms and a sustainable return on the informational value they contribute. We argue that without increased market coordination or collective bargaining on the side of these primary information producers, AI will exacerbate a large-scale “information market failure” that will lead not only to undesirable concentration of capital, but also to a potential “ecological collapse” in the informational commons. On the other hand, collective bargaining in the information economy can create market frictions and aligned incentives necessary for a pro-social, sustainable AI future. We provide concrete actions that can be taken to support a coalition-based approach to achieve this goal. For example, researchers and developers can establish technical mechanisms such as federated data management tools and explainable data value estimations, to inform and facilitate collective bargaining in the information economy. Additionally, regulatory and policy interventions may be introduced to support trusted data intermediary organizations representing guilds or syndicates of information producers…(More)”.
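As one concrete reading of “explainable data value estimations,” a leave-one-out valuation measures how much model quality drops when a given producer's data is withheld. The sketch below is a generic illustration under that assumption; the scoring function and data shapes are placeholders, not anything specified in the paper.

```python
from typing import Callable, Dict, List

def leave_one_out_values(
    datasets: Dict[str, List],                 # producer name -> their records
    train_and_score: Callable[[List], float],  # trains a model, returns held-out quality
) -> Dict[str, float]:
    """Value each producer's contribution as the quality lost without their data."""
    all_records = [r for recs in datasets.values() for r in recs]
    full_score = train_and_score(all_records)

    values = {}
    for producer in datasets:
        # Retrain on everyone's data except this producer's.
        rest = [r for p, recs in datasets.items() if p != producer for r in recs]
        values[producer] = full_score - train_and_score(rest)
    return values
```

A transparent, auditable number like this is the sort of artifact a guild of information producers could bring to the bargaining table.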

Human rights centered global governance of quantum technologies: advancing information for all


UNESCO Brief: “The integration of quantum technologies into AI systems introduces greater complexity, requiring stronger policy and technical frameworks that uphold human rights protections. Ensuring that these advancements do not widen existing inequalities or cause environmental harm is crucial.

The Brief expands on the “Quantum technologies and their global impact: discussion paper” published by UNESCO. The objective of this Brief is to unpack the multiple dimensions of the quantum ecosystem and broadly explore the human rights and policy implications of quantum technologies, with some key findings:

  • While quantum technologies promise advances for human rights in the areas of encryption, privacy, and security, they also pose risks to these very domains and to related ones such as freedom of expression and access to information.
  • Quantum innovations will reshape security, economic growth, and science, but without a robust human rights-based framework, they risk deepening inequalities and destabilizing global governance.
  • The quantum divide is emerging as a critical issue, with disparities in access to technology, expertise, and infrastructure widening global inequalities. Unchecked, this gap could limit the benefits of quantum advancements for all.
  • The quantum gender divide remains stark — 79% of quantum companies have no female senior leaders, and only 1 in 54 quantum job applicants is a woman.

The Issue Brief provides broad recommendations and targeted actions for stakeholders, emphasizing human rights-centered governance, awareness, capacity building, and inclusivity to bridge global and gender divides. The key recommendations focus on a comprehensive governance model, which must ensure a multistakeholder approach that facilitates state duties, corporate accountability, effective remedies for human rights violations, and open standards for equitable access. Prioritizing human rights in global governance will ensure quantum innovation serves all of humanity while safeguarding fundamental freedoms…(More)”.

Some signs of AI model collapse begin to reveal themselves


Article by Steven J. Vaughan-Nichols: “I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.

Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I’ve noticed that AI-enabled search, too, has been getting crappier.

In particular, I’m finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the annual financial reports that the US Securities and Exchange Commission (SEC) requires of public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they’re never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get… interesting.

This isn’t just Perplexity. I’ve done the exact same searches on all the major AI search bots, and they all give me “questionable” results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse. In model collapse, AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This occurs because errors compound across successive model generations, leading to distorted data distributions and “irreversible defects” in performance. The final result? As a 2024 Nature paper put it, “The model becomes poisoned with its own projection of reality.”

Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. The second is the loss of tail data, in which rare events are erased from training data until, eventually, entire concepts are blurred. The third is feedback loops, which reinforce narrow patterns and produce repetitive text or biased recommendations…(More)”.
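The dynamic is easy to reproduce in a toy setting. The following sketch is my own illustration, not from the article: it repeatedly fits a one-dimensional Gaussian “model” to samples drawn from the previous generation of itself. Estimation noise compounds across generations, and the fitted spread tends to drift toward zero, which is exactly the loss of tail data described above.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 50                 # training examples per generation; small, so drift is visible
mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution

for gen in range(1, 101):
    # Train on samples from the *previous* model, never on fresh real data.
    samples = rng.normal(mu, sigma, size=n)
    # "Fitting the model" here just means re-estimating the mean and spread.
    mu, sigma = samples.mean(), samples.std()
    if gen % 20 == 0:
        print(f"generation {gen:3d}: sigma = {sigma:.3f}")
```

Over enough generations the estimated sigma collapses toward zero: rare, large-magnitude samples stop appearing in the training data, so the next model never learns that they exist.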

Project Push creates an archive of news alerts from around the world


Article by Neel Dhanesha: “A little over a year ago, Matt Taylor began to feel like he was getting a few too many push notifications from the BBC News app.

It’s a feeling many of us can probably relate to. Many people, myself included, have turned off news notifications entirely in the past few months. Taylor, however, went in the opposite direction.

Instead of turning off notifications, he decided to see how the BBC — the most popular news app in the U.K., where Taylor lives — compared to other news organizations around the world. So he dug out an old Google Pixel phone, downloaded 61 news apps onto it, and signed up for push notifications on all of them.

As notifications roll in, a custom-built script (made with the help of ChatGPT) uploads their text to a server and a Bluesky page, providing a near real-time view of push notifications from services around the world. Taylor calls it Project Push.
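The plumbing for a project like this is modest. Taylor's actual ChatGPT-assisted script isn't published, so the sketch below is only a guess at its shape: the notification-capture step, archive endpoint, handle, and credentials are all placeholders (the `atproto` package is the community Python SDK for Bluesky).

```python
# pip install atproto requests
import requests
from atproto import Client

ARCHIVE_URL = "https://example.com/api/notifications"  # hypothetical endpoint

def archive_notification(app_name: str, title: str, body: str) -> None:
    """Forward one captured push notification to an archive server and Bluesky."""
    record = {"app": app_name, "title": title, "body": body}

    # 1. Store the raw notification on our own server.
    requests.post(ARCHIVE_URL, json=record, timeout=10)

    # 2. Mirror it to a Bluesky account (handle and app password are placeholders).
    client = Client()
    client.login("project-push.example", "app-password-here")
    client.send_post(text=f"{app_name}: {title} - {body}"[:300])  # Bluesky length cap
```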

People who work in news “take the front page very seriously,” said Taylor, a product manager at the Financial Times who built Project Push in his spare time. “There are lots of editors who care a lot about that, but actually one of the most important people in the newsroom is the person who decides that they’re going to press a button that sends an immediate notification to millions of people’s phones.”

The Project Push feed is a fascinating portrait of the news today. There are the expected alerts — breaking news, updates to ongoing stories like the wars in Gaza and Ukraine, the latest shenanigans in Washington — but also:

— Updates on infrastructure plans that, without the context, become absolutely baffling (a train will instead be a bus?).

— Naked attempts to increase engagement.

— Culture updates that some may argue aren’t deserving of a push alert from the Associated Press.

— Whatever this is.

Taylor tells me he’s noticed some geographic differences in how news outlets approach push notifications. Publishers based in Asia and the Middle East, for example, send far more notifications than European or American ones; CNN Indonesia alone pushed about 17,000 of the 160,000 or so notifications Project Push has logged over the past year…(More)”.

Digital Democracy in a Divided Global Landscape


10 essays by the Carnegie Endowment for International Peace: “A first set of essays analyzes how local actors are navigating the new tech landscape. Lillian Nalwoga explores the challenges and upsides of Starlink satellite internet deployment in Africa, highlighting legal hurdles, security risks, and concerns about the platform’s leadership. As African nations look to Starlink as a valuable tool in closing the digital divide, Nalwoga emphasizes the need to invest in strong regulatory frameworks to safeguard digital spaces. Jonathan Corpus Ong and Dean Jackson analyze the landscape of counter-disinformation funding in local contexts. They argue that there is a “mismatch” between the priorities of funders and the strategies that activists would like to pursue, resulting in “ineffective and extractive workflows.” Ong and Jackson isolate several avenues for structural change, including developing “big tent” coalitions of activists and strategies for localizing aid projects. Janjira Sombatpoonsiri examines the role of local actors in foreign influence operations in Southeast Asia. She highlights three motivating factors that drive local participation in these operations: financial benefits, the potential to gain an edge in domestic power struggles, and the appeal of anti-Western narratives.

A second set of essays explores evolving applications of digital repression…

A third set focuses on national strategies and digital sovereignty debates…

A fourth set explores pressing tech policy and regulatory questions…(More)”.

Amplifying Human Creativity and Problem Solving with AI Through Generative Collective Intelligence


Paper by Thomas P. Kehler, Scott E. Page, Alex Pentland, Martin Reeves and John Seely Brown: “We propose a new framework for human-AI collaboration that amplifies the distinct capabilities of both. This framework, which we call Generative Collective Intelligence (GCI), shifts AI to the group/social level and employs AI in dual roles: as interactive agents and as technology that accumulates, organizes, and leverages knowledge. By creating a cognitive bridge between human reasoning and AI models, GCI can overcome limitations of purely algorithmic approaches to problem-solving and decision-making. The framework demonstrates how AI can be reframed as a social and cultural technology that enables groups to solve complex problems through structured collaboration that transcends traditional communication barriers. We describe the mathematical foundations of GCI based on comparative judgment and minimum regret principles, and illustrate its applications across domains including climate adaptation, healthcare transformation, and civic participation. By combining human creativity with AI’s computational capabilities, GCI offers a promising approach to addressing complex societal challenges that neither humans nor machines can solve alone…(More)”.
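“Comparative judgment” here refers to ranking options from many pairwise human choices rather than absolute scores. As a minimal sketch of that general technique (my illustration; the paper's own formulation may differ), a Bradley-Terry model can be fit to pairwise votes with a simple iterative update.

```python
import numpy as np

def bradley_terry(n_items: int, pairs: list[tuple[int, int]],
                  iters: int = 200) -> np.ndarray:
    """Estimate item strengths from pairwise outcomes: pairs of (winner, loser)."""
    wins = np.zeros((n_items, n_items))
    for winner, loser in pairs:
        wins[winner, loser] += 1

    p = np.ones(n_items)  # initial strengths
    for _ in range(iters):
        for i in range(n_items):
            num = wins[i].sum()  # total wins by item i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n_items) if j != i)
            if den > 0:
                p[i] = num / den
        p /= p.sum()  # normalize: only strength ratios are identifiable
    return p

# Three proposals; judges compared them two at a time.
votes = [(0, 1), (0, 1), (1, 0),   # proposal 0 usually beats 1
         (0, 2), (0, 2), (2, 0),   # proposal 0 usually beats 2
         (1, 2), (1, 2), (2, 1)]   # proposal 1 usually beats 2
print(bradley_terry(3, votes))     # proposal 0 should rank highest
```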

Leveraging Citizen Data to Improve Public Services and Measure Progress Toward Sustainable Development Goal 16


Paper by Dilek Fraisl: “This paper presents the results of a pilot study conducted in Ghana that utilized citizen data approaches for monitoring a governance indicator within the SDG framework, focusing on indicator 16.6.2, citizen satisfaction with public services. This indicator is a crucial measure of governance quality, as emphasized by the UN Sustainable Development Goals (SDGs) through target 16.6: Develop effective, accountable, and transparent institutions at all levels. Indicator 16.6.2 specifically measures satisfaction, via a survey, with key public services, including health, education, and other government services such as government-issued identification documents. However, with only 5 years remaining to achieve the SDGs, the lack of data continues to pose a significant challenge in monitoring progress toward this target, particularly regarding the experiences of marginalized populations. Our findings suggest that well-designed citizen data initiatives can effectively capture the experiences of marginalized individuals and communities. Additionally, they can serve as valuable supplements to official statistics, providing crucial data on population groups typically underrepresented in traditional surveys…(More)”.

Ethical implications related to processing of personal data and artificial intelligence in humanitarian crises: a scoping review


Paper by Tino Kreutzer et al: “Humanitarian organizations are rapidly expanding their use of data in the pursuit of operational gains in effectiveness and efficiency. Ethical risks, particularly from artificial intelligence (AI) data processing, are increasingly recognized yet inadequately addressed by current humanitarian data protection guidelines. This study reports on a scoping review that maps the range of ethical issues that have been raised in the academic literature regarding data processing of people affected by humanitarian crises…

We identified 16,200 unique records and retained 218 relevant studies. Nearly one in three (n = 66) discussed technologies related to AI. Seventeen studies included an author from a lower-middle-income country, while four included an author from a low-income country. We identified 22 ethical issues, which were then grouped into the four ethical value categories of autonomy, beneficence, non-maleficence, and justice. Slightly over half of the included studies (n = 113) identified ethical issues based on real-world examples. The most-cited ethical issue (n = 134) was a concern for privacy in cases where personal or sensitive data might be inadvertently shared with third parties. Aside from AI, the technologies most frequently discussed in these studies included social media, crowdsourcing, and mapping tools.

Studies highlight significant concerns that data processing in humanitarian contexts can cause additional harm, may not provide direct benefits, may limit affected populations’ autonomy, and can lead to the unfair distribution of scarce resources. The increase in AI tool deployment for humanitarian assistance amplifies these concerns. Urgent development of specific, comprehensive guidelines, training, and auditing methods is required to address these ethical challenges. Moreover, empirical research from low- and middle-income countries, disproportionately affected by humanitarian crises, is vital to ensure inclusive and diverse perspectives. This research should focus on the ethical implications of both emerging AI systems and established humanitarian data management practices…(More)”.

Engagement Integrity: Ensuring Legitimacy at a time of AI-Augmented Participation


Article by Stefaan G. Verhulst: “As participatory practices are increasingly tech-enabled, ensuring engagement integrity is becoming more urgent. While considerable scholarly and policy attention has been paid to information integrity (OECD, 2024; Gillwald et al., 2024; Wardle & Derakhshan, 2017; Ghosh & Scott, 2018), including concerns about disinformation, misinformation, and computational propaganda, the integrity of engagement itself — how to ensure collective decision-making is not manipulated through the very technologies that enable it — remains comparatively under-theorized and under-protected. I define engagement integrity as the procedural fairness and resistance to manipulation of tech-enabled deliberative and participatory processes.

My definition is different from prior discussions of engagement integrity, which mainly emphasized ethical standards when scientists engage with the public (e.g., in advisory roles, communication, or co-research). The concept is particularly salient in light of recent innovations that aim to lower the transaction costs of engagement using artificial intelligence (AI) (Verhulst, 2018). From AI-facilitated citizen assemblies (Simon et al., 2023) to natural language processing (NLP)-enhanced policy proposal platforms (Grobbink & Peach, 2020) to automated analysis of unstructured direct democracy proposals (Grobbink & Peach, 2020) to large-scale deliberative polls augmented with agentic AI (Mulgan, 2022), these developments promise to enhance inclusion, scalability, and sense-making. However, they also create new attack surfaces and vectors of influence that could undermine legitimacy.

This concern is not speculative…(More)”.