Article by Travis Hoppe et al.: “Generative Artificial Intelligence (AI) is redefining how people interact with public information and shaping how public data are consumed. Recent advances in large language models (LLMs) mean that more Americans are getting answers from AI chatbots and other AI systems, which increasingly draw on public datasets. The federal statistical community can take action to advance the use of federal statistics with generative AI to ensure that official statistics are front-and-center, powering these AI-driven experiences.
The Federal Committee on Statistical Methodology (FCSM) developed the Framework for Data Quality to help analysts and the public assess fitness for use of data sets. AI-based queries present new challenges, and the framework should be enhanced to meet them. Generative AI acts as an intermediary in the consumption of public statistical information, extracting and combining data with logical strategies that differ from the thought processes and judgments of analysts. For statistical data to be accurately represented and trustworthy, they need to be machine-understandable and able to support models that measure data quality and provide contextual information.
FCSM is working to ensure that federal statistics used in these AI-driven interactions meet the data quality dimensions of the Framework including, but not limited to, accessibility, timeliness, accuracy, and credibility. We propose a new collaborative federal effort to establish best practices for optimizing APIs, metadata, and data accessibility to support accurate and trusted generative AI results…(More)”.
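The excerpt's call for machine-understandable statistics can be made concrete. As a minimal sketch (not an FCSM specification), the Python snippet below emits dataset metadata using the schema.org/Dataset vocabulary, one existing convention that AI crawlers and search engines already parse; the agency name, URL, and field values are hypothetical.

```python
import json

# A minimal, hypothetical sketch of machine-readable dataset metadata
# using the schema.org/Dataset vocabulary. All names, dates, and URLs
# below are invented for illustration.
dataset_metadata = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Monthly Unemployment Rate (hypothetical example)",
    "description": "Seasonally adjusted national unemployment rate.",
    "publisher": {"@type": "Organization", "name": "Example Statistical Agency"},
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "dateModified": "2025-05-01",           # supports the timeliness dimension
    "temporalCoverage": "2000-01/2025-04",  # supports fitness-for-use checks
    "distribution": {
        "@type": "DataDownload",
        "encodingFormat": "application/json",
        "contentUrl": "https://stats.example.gov/api/unemployment",  # hypothetical API
    },
}

print(json.dumps(dataset_metadata, indent=2))
```

Publishing structured metadata of this kind alongside an API speaks directly to the accessibility and timeliness dimensions the excerpt names, since a model can check a field like dateModified before citing a figure.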
The path for AI in poor nations does not need to be paved with billions
Editorial in Nature: “Coinciding with US President Donald Trump’s tour of Gulf states last week, Saudi Arabia announced that it is embarking on a large-scale artificial intelligence (AI) initiative. The proposed venture will have state backing and considerable involvement from US technology firms. It is the latest move in a global expansion of AI ambitions beyond the existing heartlands of the United States, China and Europe. However, as Nature India, Nature Africa and Nature Middle East report in a series of articles on AI in low- and middle-income countries (LMICs) published on 21 May (see go.nature.com/45jy3qq), the path to home-grown AI doesn’t need to be paved with billions, or even hundreds of millions, of dollars, or depend exclusively on partners in Western nations or China… As a News Feature that appears in the series makes plain (see go.nature.com/3yrd3u2), many initiatives in LMICs aren’t focusing on scaling up, but on ‘scaling right’. They are “building models that work for local users, in their languages, and within their social and economic realities”.
More such local initiatives are needed. Some of the most popular AI applications, such as OpenAI’s ChatGPT and Google Gemini, are trained mainly on data in European languages, which makes them less effective for users who speak Hindi, Arabic, Swahili, Xhosa and countless other languages. Countries are boosting home-grown apps by funding start-up companies, establishing AI education programmes, building AI research and regulatory capacity, and engaging the public.
Those LMICs that have started investing in AI began by establishing an AI strategy, including policies for AI research. However, as things stand, most of the 55 member states of the African Union and of the 22 members of the League of Arab States have not produced an AI strategy. That must change…(More)”.
Making Civic Trust Less Abstract: A Framework for Measuring Trust Within Cities
Report by Stefaan Verhulst, Andrew J. Zahuranec, and Oscar Romero: “Trust is foundational to effective governance, yet its inherently abstract nature has made it difficult to measure and operationalize, especially in urban contexts. This report proposes a practical framework for city officials to diagnose and strengthen civic trust through observable indicators and actionable interventions.

Rather than attempting to quantify trust as an abstract concept, the framework distinguishes between the drivers of trust—direct experiences and institutional interventions—and its manifestations, both emotional and behavioral. Drawing on literature reviews, expert workshops, and field engagement with the New York City Civic Engagement Commission (CEC), we present a three-phase approach: (1) baseline assessment of trust indicators, (2) analysis of causal drivers, and (3) design and continuous evaluation of targeted interventions. The report illustrates the framework’s applicability through a hypothetical case involving the NYC Parks Department and a real-world case study of the citywide participatory budgeting initiative, The People’s Money. By providing a structured, context-sensitive, and iterative model for measuring civic trust, this report seeks to equip public institutions and city officials with a framework for meaningful measurement of civic trust…(More)”.
Silicon Valley Is at an Inflection Point
Article by Karen Hao: “…In the decade that I have observed Silicon Valley — first as an engineer, then as a journalist — I’ve watched the industry shift to a new paradigm. Tech companies have long reaped the benefits of a friendly U.S. government, but the Trump administration has made clear that it will now grant new firepower to the industry’s ambitions. The Stargate announcement was just one signal. Another was the Republican tax bill that the House passed last week, which would prohibit states from regulating A.I. for the next 10 years.
The leading A.I. giants are no longer merely multinational corporations; they are growing into modern-day empires. With the full support of the federal government, soon they will be able to reshape most spheres of society as they please, from the political to the economic to the production of science…(More)”.
Surveillance pricing: How your data determines what you pay
Article by Douglas Crawford: “Surveillance pricing, also known as personalized or algorithmic pricing, is a practice where companies use your personal data, such as your location, the device you’re using, your browsing history, and even your income, to determine what price to show you. It’s not just about supply and demand — it’s about you as a consumer and how much the system thinks you’re able (or willing) to pay.
Have you ever shopped online for a flight, only to find that the price mysteriously increased the second time you checked? Or have you and a friend searched for the same hotel room on your phones, only to find your friend sees a lower price? This isn’t a glitch — it’s surveillance pricing at work.
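To make the mechanism concrete: no retailer's actual algorithm is public, but a deliberately simplified sketch of signal-based price adjustment might look like the following, with every signal and multiplier invented for illustration.

```python
# A deliberately simplified, hypothetical sketch of signal-based pricing.
# No real retailer's algorithm is shown; signals and multipliers are
# illustrative only.
BASE_PRICE = 100.00

def personalized_price(profile: dict) -> float:
    """Adjust a base price using observed consumer signals."""
    multiplier = 1.0
    if profile.get("device") == "ios":            # proxy for higher income
        multiplier += 0.05
    if profile.get("repeat_search"):              # repeated searches signal intent
        multiplier += 0.10
    if profile.get("location_tier") == "affluent_zip":
        multiplier += 0.08
    return round(BASE_PRICE * multiplier, 2)

print(personalized_price({"device": "ios", "repeat_search": True}))  # 115.0
```

The "mysteriously increased" flight price in the example above corresponds to the repeat-search signal: the same shopper, seen twice, is inferred to be more willing to pay.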
In the United States, surveillance pricing is becoming increasingly prevalent across industries including airlines, hotels, and e-commerce platforms. The practice exists elsewhere too, but in other parts of the world, such as the European Union, growing recognition of the danger this pricing model poses to citizens’ privacy has resulted in stricter data protection laws aimed at curbing it. The US appears to be moving in the opposite direction…(More)”.
In a world first, Brazilians will soon be able to sell their digital data
Article by Gabriel Daros: “Last month, Brazil announced it is rolling out a data ownership pilot that will allow its citizens to manage, own, and profit from their digital footprint — the first such nationwide initiative in the world.
The project is administered by Dataprev, a state-owned company that provides technological solutions for the government’s social programs. Dataprev is partnering with DrumWave, a California-based data valuation and monetization firm.
Today, “people get nothing from the data they share,” Brittany Kaiser, co-founder of the Own Your Data Foundation and board adviser for DrumWave, told Rest of World. “Brazil has decided its citizens should have ownership rights over their data.”
In monetizing users’ data, Brazil is ahead of the U.S., where a 2019 “data dividend” initiative by California Governor Gavin Newsom never took off. The city of Chicago successfully monetizes government data, including transportation and education data. If implemented, Brazil’s initiative will be the first public-private partnership that allows citizens, rather than companies, to get a share of the global data market, currently valued at $4 billion and expected to grow to over $40 billion by 2034.
The pilot involves a small group of Brazilians who will use data wallets for payroll loans. When users apply for a new loan, the data in the contract will be collected in the data wallets, which companies will be able to bid on. Users will have the option to opt out. It works much like third-party cookies, but instead of simply accepting or declining, people can choose to make money…(More)”.
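The article describes this flow only at a high level; the sketch below is a hypothetical rendering of the consent-plus-bidding mechanism it outlines. All names, fields, and amounts are invented, and the actual Dataprev/DrumWave design is not public.

```python
# A hypothetical sketch of the consent-plus-bidding flow described in the
# article. The real Dataprev/DrumWave system is not public; every name and
# number here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataWallet:
    owner: str
    opted_in: bool = True          # users can opt out entirely
    records: list = field(default_factory=list)
    earnings: float = 0.0

    def add_loan_record(self, record: dict) -> None:
        # Data from the loan contract lands in the citizen's wallet.
        self.records.append(record)

    def accept_best_bid(self, bids: dict[str, float]) -> str | None:
        # Unlike third-party cookies, the owner is paid when sharing.
        if not self.opted_in or not bids:
            return None
        buyer = max(bids, key=bids.get)
        self.earnings += bids[buyer]
        return buyer

wallet = DataWallet(owner="citizen-123")
wallet.add_loan_record({"type": "payroll_loan", "amount": 5000})
winner = wallet.accept_best_bid({"lender_a": 1.20, "lender_b": 2.50})
print(winner, wallet.earnings)  # lender_b 2.5
```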
Project Push creates an archive of news alerts from around the world
Article by Neel Dhanesha: “A little over a year ago, Matt Taylor began to feel like he was getting a few too many push notifications from the BBC News app.
It’s a feeling many of us can probably relate to. Many people, myself included, have turned off news notifications entirely in the past few months. Taylor, however, went in the opposite direction.
Instead of turning off notifications, he decided to see how the BBC — the most popular news app in the U.K., where Taylor lives — compared to other news organizations around the world. So he dug out an old Google Pixel phone, downloaded 61 news apps onto it, and signed up for push notifications on all of them.
As notifications roll in, a custom-built script (made with the help of ChatGPT) uploads their text to a server and a Bluesky page, providing a near real-time view of push notifications from services around the world. Taylor calls it Project Push.
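Taylor's script itself is not public; as a rough sketch, the Bluesky relay step might look like the snippet below, assuming the notification text has already been captured on the phone (for example, via Android's NotificationListenerService) and using the third-party atproto package (pip install atproto). The handle and app password are placeholders.

```python
# A minimal sketch of relaying a captured push notification to Bluesky.
# This is not Taylor's actual script; capture is assumed to happen
# upstream, and credentials below are placeholders.
from atproto import Client

def relay_notification(app_name: str, alert_text: str) -> None:
    client = Client()
    # A real script would load credentials securely, not hard-code them.
    client.login("project-push.example.bsky.social", "app-password-here")
    # Bluesky posts are capped at 300 characters, so truncate long alerts.
    client.send_post(text=f"{app_name}: {alert_text}"[:300])

relay_notification("BBC News", "Breaking: example alert text")
```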
People who work in news “take the front page very seriously,” said Taylor, a product manager at the Financial Times who built Project Push in his spare time. “There are lots of editors who care a lot about that, but actually one of the most important people in the newsroom is the person who decides that they’re going to press a button that sends an immediate notification to millions of people’s phones.”
The Project Push feed is a fascinating portrait of the news today. There are the expected alerts — breaking news, updates to ongoing stories like the wars in Gaza and Ukraine, the latest shenanigans in Washington — but also:
— Updates on infrastructure plans that, without the context, become absolutely baffling (a train will instead be a bus?).
— Naked attempts to increase engagement.
— Culture updates that some may argue aren’t deserving of a push alert from the Associated Press.
— Whatever this is.
Taylor tells me he’s noticed some geographic differences in how news outlets approach push notifications. Publishers based in Asia and the Middle East, for example, send far more notifications than European or American ones; CNN Indonesia alone pushed about 17,000 of the 160,000 or so notifications Project Push has logged over the past year…(More)”.
Trump Taps Palantir to Compile Data on Americans
Article by Sheera Frenkel and Aaron Krolik: “In March, President Trump signed an executive order calling for the federal government to share data across agencies, raising questions over whether he might compile a master list of personal information on Americans that could give him untold surveillance power.
Mr. Trump has not publicly talked about the effort since. But behind the scenes, officials have quietly put technological building blocks into place to enable his plan. In particular, they have turned to one company: Palantir, the data analysis and technology firm.
The Trump administration has expanded Palantir’s work across the federal government in recent months. The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon. (This does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent.)
Representatives of Palantir are also speaking to at least two other agencies — the Social Security Administration and the Internal Revenue Service — about buying its technology, according to six government officials and Palantir employees with knowledge of the discussions.
The push has put a key Palantir product called Foundry into at least four federal agencies, including D.H.S. and the Health and Human Services Department. Widely adopting Foundry, which organizes and analyzes data, paves the way for Mr. Trump to easily merge information from different agencies, the government officials said.
Creating detailed portraits of Americans based on government data is not just a pipe dream. The Trump administration has already sought access to hundreds of data points on citizens and others through government databases, including their bank account numbers, the amount of their student debt, their medical claims and any disability status…(More)”.
How Canada Needs to Respond to the US Data Crisis
Article by Danielle Goldfarb: “The United States is cutting and undermining official US data across a wide range of domains, eroding the foundations of evidence-based policy making. This is happening mostly under the radar here in Canada, buried by news about US President Donald Trump’s barrage of tariffs and many other alarming actions. Doing nothing in response means Canada accepts blind spots in critical areas. Instead, this country should respond by investing in essential data and building the next generation of trusted public intelligence.
The United States has cut or altered more than 2,000 official data sets across the science, health, climate and development sectors, according to the National Security Archive. Deep staff cuts across all program areas effectively cancel or deeply erode many other statistical programs….
Even before this data purge, official US data methods were becoming less relevant and reliable. Traditional government surveys lag by weeks or months and face declining participation. This lag proved particularly problematic during the COVID-19 pandemic and also now, when economic data with a one- or two-month lag is largely irrelevant for tracking the real-time impact of constantly shifting Trump tariffs….
With deep ties to the United States, Canada needs to take action to reduce these critical blind spots. The challenge also highlights a major strength: Canada’s statistical agencies have strong reputations as trusted, transparent information sources.
First, Canada should strengthen its data infrastructure. Official Canadian data suffers from delays and declining response rates similar to those in the United States. Statistics Canada needs a renewed mandate and stable resources to produce policy-relevant indicators, especially in a timelier way, and in areas where US data has been cut or compromised.
Second, Canada could also act as a trusted place to store vulnerable indicators — inventorying missing data sets, archiving those at risk and coordinating global efforts to reconstruct essential metrics.
Third, Canada has an opportunity to lead in shaping the next generation of trusted and better public-interest intelligence…(More)”.
The Teacher in the Machine: A Human History of Education Technology
Book by Anne Trumbore: “From AI tutors that promise individualized instruction but cannot do math to free online courses from elite universities that were supposed to democratize higher education, claims that technological innovations will transform education often fall short. Yet, as Anne Trumbore shows in The Teacher in the Machine, the promises of today’s cutting-edge technologies aren’t new. Long before the excitement about the disruptive potential of generative AI–powered tutors and massive open online courses, scholars at Stanford, MIT, and the University of Illinois in the 1960s and 1970s were encouraged by the US government to experiment with computers and artificial intelligence in education. Trumbore argues that the contrast between these two eras of educational technology reveals the changing role of higher education in the United States as it shifted from a public good to a private investment.
Writing from a unique insider’s perspective and drawing on interviews with key figures, historical research, and case studies, Trumbore traces today’s disparate discussions about generative AI, student loan debt, and declining social trust in higher education back to their common origins at a handful of elite universities fifty years ago. Arguing that those early educational experiments have resonance today, Trumbore points the way to a more equitable and collaborative pedagogical future. Her account offers a critical lens on the history of technology in education just as universities and students seek a stronger hand in shaping the future of their institutions…(More)”.