Setting the Standard: Statistical Agencies’ Unique Role in Building Trustworthy AI


Article by Corinna Turbes: “As our national statistical agencies grapple with new challenges posed by artificial intelligence (AI), many agencies face intense pressure to embrace generative AI as a way to reach new audiences and demonstrate technological relevance. However, the rush to implement generative AI applications risks undermining these agencies’ fundamental role as authoritative data sources. Statistical agencies’ foundational mission—producing and disseminating high-quality, authoritative statistical information—requires a more measured approach to AI adoption.

Statistical agencies occupy a unique and vital position in our data ecosystem, entrusted with creating the reliable statistics that form the backbone of policy decisions, economic planning, and social research. The work of these agencies demands exceptional precision, transparency, and methodological rigor. Implementation of generative AI interfaces, while technologically impressive, could inadvertently compromise the very trust and accuracy that make these agencies indispensable.

While public-facing interfaces play a valuable role in democratizing access to statistical information, statistical agencies need not—and often should not—rely on generative AI to be effective in that effort. For statistical agencies, an extractive AI approach – which retrieves and presents existing information from verified databases rather than generating new content – offers a more appropriate path forward. By pulling from verified, structured datasets and providing precise, accurate responses, extractive AI systems can maintain the high standards of accuracy required while making statistical information more accessible to users who may find traditional databases overwhelming. An extractive, rather than generative, approach allows agencies to modernize data delivery while preserving their core mission of providing reliable, verifiable statistical information…(More)”
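
To make the distinction concrete, the following is a minimal sketch of what an extractive interface could look like: a user query is matched against a curated catalog of published figures, and the system returns the stored value and its source verbatim rather than generating text. The catalog, indicator names, and values are hypothetical placeholders, not any agency's actual data or system.

```python
# Minimal sketch of an "extractive" lookup over a verified statistical catalog.
# The catalog and its values are hypothetical placeholders; a real agency
# system would query its own vetted database and cite the official release.

VERIFIED_INDICATORS = {
    # keyword -> (indicator name, published value, source citation), illustrative only
    "unemployment": ("Unemployment rate, seasonally adjusted", "4.1%", "Agency release 2024-11"),
    "median income": ("Median household income", "$80,610", "Agency release 2024-09"),
}

def extractive_answer(query: str) -> str:
    """Return a stored statistic verbatim; never synthesize a number."""
    q = query.lower()
    for keyword, (name, value, source) in VERIFIED_INDICATORS.items():
        if keyword in q:
            # The response quotes the verified figure together with its provenance.
            return f"{name}: {value} (source: {source})"
    # No match: decline rather than generate an unverified answer.
    return "No verified statistic matches this query; please refine your search."

print(extractive_answer("What is the current unemployment rate?"))
```

The design choice that preserves trust is the fallback: when no verified record matches, the system declines to answer instead of synthesizing one.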

Bad data costs Americans trillions. Let’s fix it with a renewed data strategy


Article by Nick Hart & Suzette Kent: “Over the past five years, the federal government lost $200 billion to $500 billion per year to fraud and improper payments — that’s up to $3,000 taken from every working American’s pocket annually. Since 2003, these preventable losses have totaled an astounding $2.7 trillion. But here’s the good news: We already have the data and technology to largely eliminate this waste in the years ahead. The operational structure and legal authority to put these tools to work protecting taxpayer dollars need to be refreshed and prioritized.

The challenge is straightforward: Government agencies often can’t effectively share and verify basic information before sending payments. For example, federal agencies may not be able to easily check if someone is deceased, verify income or detect duplicate payments across programs…(More)”.
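
As an illustration of the kind of pre-payment verification the authors describe, the sketch below checks a proposed payment against a death-records list and against payments already issued under other programs. The record layout, identifiers, and rules are assumptions for illustration only, not any agency's actual process.

```python
# Minimal sketch of pre-payment verification of the kind described above:
# check a payee against death records and flag duplicate payments across
# programs before funds go out. Record formats and identifiers are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Payment:
    payee_id: str   # hypothetical cross-program identifier
    program: str
    amount: float

def verify_payment(payment, deceased_ids, issued_payments):
    """Return reasons to hold the payment; an empty list means it is clear to pay."""
    issues = []
    if payment.payee_id in deceased_ids:
        issues.append("payee appears in death records")
    for prior in issued_payments:
        if prior.payee_id == payment.payee_id and prior.program != payment.program:
            issues.append(f"possible duplicate benefit already paid under {prior.program}")
    return issues

deceased = {"A-102"}
history = [Payment("A-205", "Program X", 1200.0)]
print(verify_payment(Payment("A-205", "Program Y", 1200.0), deceased, history))
# -> ['possible duplicate benefit already paid under Program X']
```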

Data for Better Governance: Building Government Analytics Ecosystems in Latin America and the Caribbean


Report by the World Bank: “Governments in Latin America and the Caribbean face significant development challenges, including insufficient economic growth, inflation, and institutional weaknesses. Overcoming these issues requires identifying systemic obstacles through data-driven diagnostics and equipping public officials with the skills to implement effective solutions.

Although public administrations in the region often have access to valuable data, they frequently fall short in analyzing it to inform decisions. The cost of that gap is substantial: inefficiencies in procurement, misdirected transfers, and poorly managed human resources result in an estimated waste of 4% of GDP, equivalent to 17% of all public spending.

The report “Data for Better Governance: Building Government Analytics Ecosystems in Latin America and the Caribbean” outlines a roadmap for developing government analytics, focusing on key enablers such as data infrastructure and analytical capacity, and offers actionable strategies for improvement…(More)”.

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Scientists Scramble to Save Climate Data from Trump—Again


Article by Chelsea Harvey: “Eight years ago, as the Trump administration was getting ready to take office for the first time, mathematician John Baez was making his own preparations.

Together with a small group of friends and colleagues, he was arranging to download large quantities of public climate data from federal websites in order to safely store them away. Then-President-elect Donald Trump had repeatedly denied the basic science of climate change and had begun nominating climate skeptics for cabinet posts. Baez, a professor at the University of California, Riverside, was worried the information — everything from satellite data on global temperatures to ocean measurements of sea-level rise — might soon be destroyed.

His effort, known as the Azimuth Climate Data Backup Project, archived at least 30 terabytes of federal climate data by the end of 2017.

In the end, the precaution proved unnecessary.

The first Trump administration altered or deleted numerous federal web pages containing public-facing climate information, according to monitoring efforts by the nonprofit Environmental Data and Governance Initiative (EDGI), which tracks changes on federal websites. But federal databases, containing vast stores of globally valuable climate information, remained largely intact through the end of Trump’s first term.

Yet as Trump prepares to take office again, scientists are growing more worried.

Federal datasets may be in bigger trouble this time than they were under the first Trump administration, they say. And they’re preparing to begin their archiving efforts anew.

“This time around we expect them to be much more strategic,” said Gretchen Gehrke, EDGI’s website monitoring program lead. “My guess is that they’ve learned their lessons.”

The Trump transition team didn’t respond to a request for comment.

Like Baez’s Azimuth project, EDGI was born in 2016 in response to Trump’s first election. They weren’t the only ones…(More)”.

AI Investment Potential Index: Mapping Global Opportunities for Sustainable Development


Paper by AFD: “…examines the potential of artificial intelligence (AI) investment to drive sustainable development across diverse national contexts. By evaluating critical factors, including AI readiness, social inclusion, human capital, and macroeconomic conditions, we construct a nuanced and comprehensive analysis of the global AI landscape. Employing advanced statistical techniques and machine learning algorithms, we identify nations with significant untapped potential for AI investment.
We introduce the AI Investment Potential Index (AIIPI), a novel instrument designed to guide financial institutions, development banks, and governments in making informed, strategic AI investment decisions. The AIIPI synthesizes metrics of AI readiness with socio-economic indicators to identify and highlight opportunities for fostering inclusive and sustainable growth. The methodological novelty lies in the weight selection process, which combines statistical modeling with an entropy-based weighting approach. Furthermore, we provide detailed policy implications to support stakeholders in making targeted investments aimed at reducing disparities and advancing equitable technological development…(More)”.
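
The entropy-based weighting the authors cite is a standard technique for composite indices: indicators that differentiate countries more strongly (lower entropy across countries) receive larger weights. The sketch below shows that step on made-up data; the indicator values and the min-max normalization are illustrative assumptions, not the paper's actual inputs or exact procedure.

```python
# Minimal sketch of entropy-based weighting for a composite index on made-up
# data. Rows are countries, columns are indicators (e.g., AI readiness,
# human capital, macroeconomic stability); all numbers are illustrative.
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy weight method: more dispersed indicators receive larger weights."""
    n, _ = X.shape
    # Min-max normalize each indicator, with small offsets to avoid log(0).
    X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12) + 1e-12
    P = X_norm / X_norm.sum(axis=0)                # each country's share per indicator
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy of each indicator in [0, 1]
    d = 1.0 - e                                    # degree of differentiation
    return d / d.sum()                             # normalized weights

X = np.array([
    [0.8, 0.6, 0.7],
    [0.5, 0.6, 0.4],
    [0.9, 0.7, 0.8],
    [0.3, 0.5, 0.2],
    [0.6, 0.6, 0.5],
])
weights = entropy_weights(X)
scores = ((X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)) @ weights
print("weights:", weights)
print("composite scores:", scores)
```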

Shifting Patterns of Social Interaction: Exploring the Social Life of Urban Spaces Through A.I.


Paper by Arianna Salazar-Miranda et al.: “We analyze changes in pedestrian behavior over a 30-year period in four urban public spaces located in New York, Boston, and Philadelphia. Building on William Whyte’s observational work from 1980, where he manually recorded pedestrian behaviors, we employ computer vision and deep learning techniques to examine video footage from 1979-80 and 2008-10. Our analysis measures changes in walking speed, lingering behavior, group sizes, and group formation. We find that the average walking speed has increased by 15%, while the time spent lingering in these spaces has halved across all locations. Although the percentage of pedestrians walking alone remained relatively stable (from 67% to 68%), the frequency of group encounters declined, indicating fewer interactions in public spaces. This shift suggests that urban residents increasingly view streets as thoroughfares rather than as social spaces, which has important implications for the role of public spaces in fostering social engagement…(More)”.
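
As a rough illustration of the measurements the authors automate, the sketch below derives mean walking speed and lingering time from a single tracked pedestrian trajectory (timestamped positions). The upstream detection and tracking steps, and the speed threshold used to define lingering, are assumptions for illustration rather than the paper's pipeline.

```python
# Minimal sketch of deriving walking speed and lingering time from one tracked
# pedestrian trajectory: a list of (time in seconds, x, y) positions in meters.
# The 0.3 m/s "lingering" threshold is an illustrative assumption.
import math

def trajectory_metrics(track, linger_speed=0.3):
    """Return (mean walking speed in m/s, total lingering time in seconds)."""
    speeds, lingering = [], 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue
        speed = math.hypot(x1 - x0, y1 - y0) / dt
        speeds.append(speed)
        if speed < linger_speed:      # near-stationary segment counts as lingering
            lingering += dt
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return mean_speed, lingering

# Hypothetical track sampled once per second: two seconds of walking, then a pause.
track = [(0, 0.0, 0.0), (1, 1.3, 0.0), (2, 2.6, 0.1), (3, 2.7, 0.1), (4, 2.7, 0.2)]
print(trajectory_metrics(track))
```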

Courts in Buenos Aires are using ChatGPT to draft rulings


Article by Victoria Mendizabal: “In May, the Public Prosecution Service of the City of Buenos Aires began using generative AI to predict rulings for some public employment cases related to salary demands.

Since then, justice employees at the office for contentious administrative and tax matters of the city of Buenos Aires have uploaded case documents into ChatGPT, which analyzes patterns, offers a preliminary classification from a catalog of templates, and drafts a decision. So far, ChatGPT has been used to draft 20 rulings.

The use of generative AI has cut the time it takes to draft a ruling from an hour to about 10 minutes, according to recent studies conducted by the office.

“We, as professionals, are not the main characters anymore. We have become editors,” Juan Corvalán, deputy attorney general in contentious administrative and tax matters, told Rest of World.

The introduction of generative AI tools has improved efficiency at the office, but it has also prompted concerns within the judiciary and among independent legal experts about possible biases, the treatment of personal data, and the emergence of hallucinations. Similar concerns have echoed beyond Argentina’s borders.

“Any inconsistent use, such as sharing sensitive information, could have a considerable legal cost,” Lucas Barreiro, a lawyer specializing in personal data protection and a member of Privaia, a civil association dedicated to the defense of human rights in the digital era, told Rest of World.

Judges in the U.S. have voiced skepticism about the use of generative AI in the courts, with Manhattan Federal Judge Edgardo Ramos saying earlier this year that “ChatGPT has been shown to be an unreliable resource.” In Colombia and the Netherlands, the use of ChatGPT by judges was criticized by local experts. But not everyone is concerned: A court of appeals judge in the U.K. who used ChatGPT to write part of a judgment said that it was “jolly useful.”

For Corvalán, the move to generative AI is the culmination of a years-long transformation within the City of Buenos Aires’ attorney general’s office. In 2017, Corvalán put together a group of developers to train an AI-powered system called PROMETEA, which was intended to automate judicial tasks and expedite case proceedings. The team used more than 300,000 rulings and case files related to housing protection, public employment bonuses, enforcement of unpaid fines, and denial of cab licenses to individuals with criminal records…(More)”.
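
The workflow described in the excerpt (classify a case against a catalog of templates, then produce a draft for human review) can be sketched minimally as follows. This is a hypothetical illustration that uses keyword matching in place of the office's model-based classification; the categories, template texts, and names are invented.

```python
# Hypothetical sketch of a classify-then-draft workflow: match a case summary
# to a catalog of ruling templates and produce a draft flagged for human
# review. Categories, template texts, and names are invented for illustration.

TEMPLATES = {
    "salary_claim": (
        "Having reviewed the claim for unpaid salary items filed by {claimant}, "
        "and consistent with prior rulings in this category, the claim is {outcome}."
    ),
    "unpaid_fine": (
        "Regarding enforcement of the unpaid fine against {claimant}, "
        "the court resolves: {outcome}."
    ),
}

def classify(case_summary: str) -> str:
    """Crude keyword classifier standing in for the model-based step."""
    text = case_summary.lower()
    if "salary" in text or "bonus" in text:
        return "salary_claim"
    return "unpaid_fine"

def draft_ruling(case_summary: str, claimant: str, outcome: str) -> str:
    category = classify(case_summary)
    body = TEMPLATES[category].format(claimant=claimant, outcome=outcome)
    return "[DRAFT - requires review by a judicial officer]\n" + body

print(draft_ruling("Claim for unpaid salary supplement, public employee", "J. Pérez", "admitted"))
```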

Artificial Intelligence and the Future of Work


Report by the National Academies: “AI technology is at an inflection point: a surge of technological progress has driven the rapid development and adoption of generative AI systems, such as ChatGPT, which are capable of generating text, images, or other content based on user requests.

This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.

This National Academies report evaluates recent advances in AI technology and their implications for economic productivity, job stability, and income inequality, identifying research opportunities and data needs to equip workers and policymakers to flexibly respond to AI developments…(More)”

Congress should designate an entity to oversee data security, GAO says


Article by Matt Bracken: “Federal agencies may need to rethink how they handle individuals’ personal data to protect their civil rights and civil liberties, a congressional watchdog said in a new report Tuesday.

Without federal guidance governing the protection of the public’s civil rights and liberties, agencies have pursued a patchwork system of policies tied to the collection, sharing and use of data, the Government Accountability Office said.

To address that problem head-on, the GAO is recommending that Congress select “an appropriate federal entity” to produce guidance or regulations regarding data protection that would apply to all agencies, giving that entity “the explicit authority to make needed technical and policy choices or explicitly stating Congress’s own choices.”

That recommendation was formed after the GAO sent a questionnaire to all 24 Chief Financial Officers Act agencies asking for information about their use of emerging technologies and data capabilities and how they’re guaranteeing that personally identifiable information is safeguarded.

The GAO found that 16 of those CFO Act agencies have policies or procedures in place to protect civil rights and civil liberties with regard to data use, while the other eight have not taken steps to do the same.

The most commonly cited issues for agencies in their efforts to protect the civil rights and civil liberties of the public were “complexities in handling protections associated with new and emerging technologies” and “a lack of qualified staff possessing needed skills in civil rights, civil liberties, and emerging technologies.”

“Further, eight of the 24 agencies believed that additional government-wide law or guidance would strengthen consistency in addressing civil rights and civil liberties protections,” the GAO wrote. “One agency noted that such guidance could eliminate the hodge-podge approach to the governance of data and technology.”

All 24 CFO Act agencies have internal offices to “handle the protection of the public’s civil rights as identified in federal laws,” with much of that work centered on the handling of civil rights violations and related complaints. Four agencies — the departments of Defense, Homeland Security, Justice and Education — have offices to specifically manage civil liberty protections across their entire agencies. The other 20 agencies have mostly adopted a “decentralized approach to protecting civil liberties, including when collecting, sharing, and using data,” the GAO noted…(More)”.