Bad data costs Americans trillions. Let’s fix it with a renewed data strategy


Article by Nick Hart & Suzette Kent: “Over the past five years, the federal government lost $200 billion to $500 billion per year in fraud and improper payments — that’s up to $3,000 taken from every working American’s pocket annually. Since 2003, these preventable losses have totaled an astounding $2.7 trillion. But here’s the good news: We already have the data and technology to largely eliminate this waste in the years ahead. The operational structure and legal authority to put these tools to work protecting taxpayer dollars need to be refreshed and prioritized.

The challenge is straightforward: Government agencies often can’t effectively share and verify basic information before sending payments. For example, federal agencies may not be able to easily check if someone is deceased, verify income or detect duplicate payments across programs…(More)”.

The British state is blind


The Economist: “Britain is a bit bigger than it thought. In 2023 net migration stood at 906,000 people, rather more than the 740,000 previously estimated, according to the Office for National Statistics. It is equivalent to discovering an extra Slough. New numbers for 2022 also arrived. At first the ONS thought net migration stood at 606,000. Now it reckons the figure was 872,000, a difference roughly the size of Stoke-on-Trent, a small English city.

If statistics enable the state to see, then the British government is increasingly short-sighted. Fundamental questions, such as how many people arrive each year, are now tricky to answer. How many people are in work? The answer is fuzzy. Just how big is the backlog of court cases? The Ministry of Justice will not say, because it does not know. Britain is a blind state.

This causes all sorts of problems. The Labour Force Survey, once a gold standard of data collection, now struggles to provide basic figures. At one point the Resolution Foundation, an economic think-tank, reckoned the ONS had underestimated the number of workers by almost 1m since 2019. Even after the ONS rejigged its tally on December 3rd, the discrepancy is still perhaps 500,000, Resolution reckons. Things are so bad that Andrew Bailey, the governor of the Bank of England, makes jokes about the inaccuracy of Britain’s job-market stats in after-dinner speeches—akin to a pilot bursting out of the cockpit mid-flight and asking to borrow a compass, with a chuckle.

Sometimes the sums in question are vast. When the Department for Work and Pensions put out a new survey on household income in the spring, it was missing about £40bn ($51bn) of benefit income, roughly 1.5% of GDP or 13% of all welfare spending. This makes things like calculating the rate of child poverty much harder. Labour MPs want this line to go down. Yet the government has little idea where the line is to begin with.

Even small numbers are hard to count. Britain has a backlog of court cases. How big no one quite knows: the Ministry of Justice has not published any data on it since March. In the summer, concerned about reliability, it held back the numbers (which means the numbers it did publish are probably wrong, says the Institute for Government, another think-tank). And there is no way of tracking someone from charge to court to prison to probation. Justice is meant to be blind, but not to her own conduct…(More)”.

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Scientists Scramble to Save Climate Data from Trump—Again


Article by Chelsea Harvey: “Eight years ago, as the Trump administration was getting ready to take office for the first time, mathematician John Baez was making his own preparations.

Together with a small group of friends and colleagues, he was arranging to download large quantities of public climate data from federal websites in order to safely store them away. Then-President-elect Donald Trump had repeatedly denied the basic science of climate change and had begun nominating climate skeptics for cabinet posts. Baez, a professor at the University of California, Riverside, was worried the information — everything from satellite data on global temperatures to ocean measurements of sea-level rise — might soon be destroyed.

His effort, known as the Azimuth Climate Data Backup Project, archived at least 30 terabytes of federal climate data by the end of 2017.

In the end, the precaution proved largely unnecessary.

The first Trump administration altered or deleted numerous federal web pages containing public-facing climate information, according to monitoring efforts by the nonprofit Environmental Data and Governance Initiative (EDGI), which tracks changes on federal websites. But federal databases, containing vast stores of globally valuable climate information, remained largely intact through the end of Trump’s first term.

Yet as Trump prepares to take office again, scientists are growing more worried.

Federal datasets may be in bigger trouble this time than they were under the first Trump administration, they say. And they’re preparing to begin their archiving efforts anew.

“This time around we expect them to be much more strategic,” said Gretchen Gehrke, EDGI’s website monitoring program lead. “My guess is that they’ve learned their lessons.”

The Trump transition team didn’t respond to a request for comment.

Like Baez’s Azimuth project, EDGI was born in 2016 in response to Trump’s first election. They weren’t the only ones…(More)”.

AI adoption in the public sector


Two studies from the Joint Research Centre: “…delve into the factors that influence the adoption of Artificial Intelligence (AI) in public sector organisations.

The first report analyses a survey conducted among 574 public managers across seven EU countries, identifying the main drivers of AI adoption and offering three key recommendations to practitioners.

Strong expertise and various organisational factors emerge as key contributors to AI adoption. The second study sheds light on the essential competences and governance practices required for the effective adoption and use of AI in the public sector across Europe…

The study finds that AI adoption is no longer a promise for public administration, but a reality, particularly in service delivery and internal operations, and to a lesser extent in policy decision-making. It also highlights the importance of organisational factors such as leadership support, innovative culture, clear AI strategy, and in-house expertise in fostering AI adoption. Anticipated citizen needs are also identified as a key external factor driving AI adoption. 

Based on these findings, the report offers three policy recommendations. First, it suggests paying attention to AI and digitalisation in leadership programmes, organisational development and strategy building. Second, it recommends broadening in-house expertise on AI, which should include not only technical expertise, but also expertise on ethics, governance, and law. Third, the report advises monitoring (for instance through focus groups and surveys) and exchanging on citizen needs and levels of readiness for digital improvements in government service delivery…(More)”.

Access to data for research: lessons for the National Data Library from the front lines of AI innovation


Report by the Minderoo Centre for Technology and Democracy and the Bennett Institute for Public Policy: “…a series of case studies on access to data for research. These case studies illustrate the barriers that researchers are grappling with, and suggest how a new wave of policy development could help address these.

Each shows innovative uses of data for research in areas that are critically important to science and society, including:

The projects highlight crucial design considerations for the UK’s National Data Library and the need for a digital infrastructure that connects data, researchers, and resources that enable data use. By centring the experiences of researchers on the front-line of AI innovation, this report hopes to bring some of those barriers into focus and inform continued conversations in this area…(More)”.

Flood data platform governance: Identifying the technological and socio-technical approach(es) differences


Paper by Mahardika Fadmastuti, David Nowak, and Joep Crompvoets: “Data platform governance concerns which decisions must be made to fulfil a data platform’s mission and who makes those decisions. Existing data platform governance frameworks are designed for general platform ecosystems that manage data as an organizational asset. Flood data platforms, however, are essential tools for enhancing the governance of flood risks, and their governance remains understudied. By adopting a data governance domains framework, this paper identifies how technological and socio-technical approaches to flood data platforms differ in the public value(s) they pursue. Empirically, we analyze two flood data platforms to contrast the approaches, combining web observations and interviews in a qualitative design. Regardless of the approach taken, integrating flood data platform technologies into government authorities’ routines requires organizational commitment that drives value creation. The key difference between the approaches lies in how government sectors view the technology: our case study shows that the technological approach values improving the capabilities and performance of the public authority, while the socio-technical approach focuses on providing engagement value to public users. We further explore these differences by analyzing each decision domain in the data governance framework…(More)”

Courts in Buenos Aires are using ChatGPT to draft rulings


Article by Victoria Mendizabal: “In May, the Public Prosecution Service of the City of Buenos Aires began using generative AI to predict rulings for some public employment cases related to salary demands.

Since then, justice employees at the office for contentious administrative and tax matters of the city of Buenos Aires have uploaded case documents into ChatGPT, which analyzes patterns, offers a preliminary classification from a catalog of templates, and drafts a decision. So far, ChatGPT has been used for 20 rulings.

The use of generative AI has cut down the time it takes to draft a ruling from an hour to about 10 minutes, according to recent studies conducted by the office.

“We, as professionals, are not the main characters anymore. We have become editors,” Juan Corvalán, deputy attorney general in contentious administrative and tax matters, told Rest of World.

The introduction of generative AI tools has improved efficiency at the office, but it has also prompted concerns within the judiciary and among independent legal experts about possible biases, the treatment of personal data, and the emergence of hallucinations. Similar concerns have echoed beyond Argentina’s borders.

“Any inconsistent use, such as sharing sensitive information, could have a considerable legal cost,” Lucas Barreiro, a lawyer specializing in personal data protection and a member of Privaia, a civil association dedicated to the defense of human rights in the digital era, told Rest of World.

Judges in the U.S. have voiced skepticism about the use of generative AI in the courts, with Manhattan Federal Judge Edgardo Ramos saying earlier this year that “ChatGPT has been shown to be an unreliable resource.” In Colombia and the Netherlands, the use of ChatGPT by judges was criticized by local experts. But not everyone is concerned: A court of appeals judge in the U.K. who used ChatGPT to write part of a judgment said that it was “jolly useful.”

For Corvalán, the move to generative AI is the culmination of a years-long transformation within the City of Buenos Aires’ attorney general’s office. In 2017, Corvalán put together a group of developers to train an AI-powered system called PROMETEA, which was intended to automate judicial tasks and expedite case proceedings. The team used more than 300,000 rulings and case files related to housing protection, public employment bonuses, enforcement of unpaid fines, and denial of cab licenses to individuals with criminal records…(More)”.

Artificial Intelligence and the Future of Work


Report by the National Academies: “AI technology is at an inflection point: a surge of technological progress has driven the rapid development and adoption of generative AI systems, such as ChatGPT, which are capable of generating text, images, or other content based on user requests.

This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.

This National Academies’ report evaluates recent advances in AI technology and their implications for economic productivity, job stability, and income inequality, identifying research opportunities and data needs to equip workers and policymakers to flexibly respond to AI developments…(More)”

What AI Can’t Do for Democracy


Essay by Daniel Berliner: “In short, there is increasing optimism among both theorists and practitioners over the potential for technology-enabled civic engagement to rejuvenate or deepen democracy. Is this optimism justified?

The answer depends on how we think about what civic engagement can do. Political representatives are often unresponsive to the preferences of ordinary people. Their misperceptions of public needs and preferences are partly to blame, but the sources of democratic dysfunction are much deeper and more structural than information alone. Working to ensure many more “citizens’ voices are truly heard” will thus do little to improve government responsiveness in contexts where the distribution of power means that policymakers have no incentive to do what citizens say. And as some critics have argued, it can even distract from recognizing and remedying other problems, creating a veneer of legitimacy—what health policy expert Sherry Arnstein once famously derided as mere “window dressing.”

Still, there are plenty of cases where contributions from citizens can highlight new problems that need addressing, new perspectives by which issues are understood, and new ideas for solving public problems—from administrative agencies seeking public input to city governments seeking to resolve resident complaints and citizens’ assemblies deliberating on climate policy. But even in these and other contexts, there is reason to doubt AI’s usefulness across the board. The possibilities of AI for civic engagement depend crucially on what exactly it is that policymakers want to learn from the public. For some types of learning, applications of AI can make major contributions to enhance the efficiency and efficacy of information processing. For others, there is no getting around the fundamental needs for human attention and context-specific knowledge in order to adequately make sense of public voices. We need to better understand these differences to avoid wasting resources on tools that might not deliver useful information…(More)”.