The British state is blind


The Economist: “Britain is a bit bigger than it thought. In 2023 net migration stood at 906,000 people, rather more than the 740,000 previously estimated, according to the Office for National Statistics. It is equivalent to discovering an extra Slough. New numbers for 2022 also arrived. At first the ONS thought net migration stood at 606,000. Now it reckons the figure was 872,000, a difference roughly the size of Stoke-on-Trent, a small English city.

If statistics enable the state to see, then the British government is increasingly short-sighted. Fundamental questions, such as how many people arrive each year, are now tricky to answer. How many people are in work? The answer is fuzzy. Just how big is the backlog of court cases? The Ministry of Justice will not say, because it does not know. Britain is a blind state.

This causes all sorts of problems. The Labour Force Survey, once a gold standard of data collection, now struggles to provide basic figures. At one point the Resolution Foundation, an economic think-tank, reckoned the ONS had underestimated the number of workers by almost 1m since 2019. Even after the ONS rejigged its tally on December 3rd, the discrepancy is still perhaps 500,000, Resolution reckons. Things are so bad that Andrew Bailey, the governor of the Bank of England, makes jokes about the inaccuracy of Britain’s job-market stats in after-dinner speeches—akin to a pilot bursting out of the cockpit mid-flight and asking to borrow a compass, with a chuckle.

Sometimes the sums in question are vast. When the Department for Work and Pensions put out a new survey on household income in the spring, it was missing about £40bn ($51bn) of benefit income, roughly 1.5% of GDP or 13% of all welfare spending. This makes things like calculating the rate of child poverty much harder. Labour MPs want this line to go down. Yet the government has little idea where the line is to begin with.

Even small numbers are hard to count. Britain has a backlog of court cases. How big no one quite knows: the Ministry of Justice has not published any data on it since March. In the summer, concerned about reliability, it held back the numbers (which means the numbers it did publish are probably wrong, says the Institute for Government, another think-tank). And there is no way of tracking someone from charge to court to prison to probation. Justice is meant to be blind, but not to her own conduct…(More)”.

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Scientists Scramble to Save Climate Data from Trump—Again


Article by Chelsea Harvey: “Eight years ago, as the Trump administration was getting ready to take office for the first time, mathematician John Baez was making his own preparations.

Together with a small group of friends and colleagues, he was arranging to download large quantities of public climate data from federal websites in order to safely store them away. Then-President-elect Donald Trump had repeatedly denied the basic science of climate change and had begun nominating climate skeptics for cabinet posts. Baez, a professor at the University of California, Riverside, was worried the information — everything from satellite data on global temperatures to ocean measurements of sea-level rise — might soon be destroyed.

His effort, known as the Azimuth Climate Data Backup Project, archived at least 30 terabytes of federal climate data by the end of 2017.

In the end, the precaution proved unnecessary.

The first Trump administration altered or deleted numerous federal web pages containing public-facing climate information, according to monitoring efforts by the nonprofit Environmental Data and Governance Initiative (EDGI), which tracks changes on federal websites. But federal databases, containing vast stores of globally valuable climate information, remained largely intact through the end of Trump’s first term.

Yet as Trump prepares to take office again, scientists are growing more worried.

Federal datasets may be in bigger trouble this time than they were under the first Trump administration, they say. And they’re preparing to begin their archiving efforts anew.

“This time around we expect them to be much more strategic,” said Gretchen Gehrke, EDGI’s website monitoring program lead. “My guess is that they’ve learned their lessons.”

The Trump transition team didn’t respond to a request for comment.

Like Baez’s Azimuth project, EDGI was born in 2016 in response to Trump’s first election. They weren’t the only ones…(More)”.

Can AI review the scientific literature — and figure out what it all means?


Article by Helen Pearson: “When Sam Rodriques was a neurobiology graduate student, he was struck by a fundamental limitation of science. Even if researchers had already produced all the information needed to understand a human cell or a brain, “I’m not sure we would know it”, he says, “because no human has the ability to understand or read all the literature and get a comprehensive view.”

Five years later, Rodriques says he is closer to solving that problem using artificial intelligence (AI). In September, he and his team at the US start-up FutureHouse announced that an AI-based system they had built could, within minutes, produce syntheses of scientific knowledge that were more accurate than Wikipedia pages. The team promptly generated Wikipedia-style entries on around 17,000 human genes, most of which previously lacked a detailed page.

Rodriques is not the only one turning to AI to help synthesize science. For decades, scholars have been trying to accelerate the onerous task of compiling bodies of research into reviews. “They’re too long, they’re incredibly intensive and they’re often out of date by the time they’re written,” says Iain Marshall, who studies research synthesis at King’s College London. The explosion of interest in large language models (LLMs), the generative-AI programs that underlie tools such as ChatGPT, is prompting fresh excitement about automating the task…(More)”.

Courts in Buenos Aires are using ChatGPT to draft rulings


Article by Victoria Mendizabal: “In May, the Public Prosecution Service of the City of Buenos Aires began using generative AI to predict rulings for some public employment cases related to salary demands.

Since then, justice employees at the office for contentious administrative and tax matters of the city of Buenos Aires have uploaded case documents into ChatGPT, which analyzes patterns, offers a preliminary classification from a catalog of templates, and drafts a decision. So far, ChatGPT has been used in 20 rulings.

The use of generative AI has cut the time it takes to draft a ruling from an hour to about 10 minutes, according to recent studies conducted by the office.

“We, as professionals, are not the main characters anymore. We have become editors,” Juan Corvalán, deputy attorney general in contentious administrative and tax matters, told Rest of World.

The introduction of generative AI tools has improved efficiency at the office, but it has also prompted concerns within the judiciary and among independent legal experts about possible biases, the treatment of personal data, and the emergence of hallucinations. Similar concerns have echoed beyond Argentina’s borders.

“Any inconsistent use, such as sharing sensitive information, could have a considerable legal cost,” Lucas Barreiro, a lawyer specializing in personal data protection and a member of Privaia, a civil association dedicated to the defense of human rights in the digital era, told Rest of World.

Judges in the U.S. have voiced skepticism about the use of generative AI in the courts, with Manhattan Federal Judge Edgardo Ramos saying earlier this year that “ChatGPT has been shown to be an unreliable resource.” In Colombia and the Netherlands, the use of ChatGPT by judges was criticized by local experts. But not everyone is concerned: A court of appeals judge in the U.K. who used ChatGPT to write part of a judgment said that it was “jolly useful.”

For Corvalán, the move to generative AI is the culmination of a years-long transformation within the City of Buenos Aires’ attorney general’s office. In 2017, Corvalán put together a group of developers to train an AI-powered system called PROMETEA, which was intended to automate judicial tasks and expedite case proceedings. The team used more than 300,000 rulings and case files related to housing protection, public employment bonuses, enforcement of unpaid fines, and denial of cab licenses to individuals with criminal records…(More)”.

Using generative AI for crisis foresight


Article by Antonin Kenens and Josip Ivanovic: “What if the next time you discuss a complex future and its potential crises, it could be transformed from a typical meeting into an immersive experience? That’s exactly what we did at a recent strategy meeting of UNDP’s Crisis Bureau and Bureau for Policy and Programme Support.  

In an environment where workshops and meetings can often feel monotonous, we aimed to break the mold. By using AI-generated videos, we brought our discussion to life, reflecting the realities of developing nations and immersing participants in the critical issues affecting our region.

In today’s rapidly changing world, the ability to anticipate and prepare for potential crises is more crucial than ever. Crisis foresight involves identifying and analyzing possible future crises to develop strategies that can mitigate their impact. This proactive approach, highlighted multiple times in the Pact for the Future, is essential for effective governance and sustainable development in Europe and Central Asia and the rest of the world.

Visualization of the consequences of pollution in Joraland.

Our idea behind creating AI-generated videos was to provide a vivid, immersive experience that would engage viewers and stimulate active participation, prompting them to share their reflections on the challenges and opportunities in developing countries. We presented fictional yet relatable scenarios to gather the meeting’s participants around a common view and to create a sense of urgency and importance around UNDP’s strategic priorities and initiatives.

This approach not only captured attention but also sparked deeper engagement and thought-provoking conversations…(More)”.

What AI Can’t Do for Democracy


Essay by Daniel Berliner: “In short, there is increasing optimism among both theorists and practitioners over the potential for technology-enabled civic engagement to rejuvenate or deepen democracy. Is this optimism justified?

The answer depends on how we think about what civic engagement can do. Political representatives are often unresponsive to the preferences of ordinary people. Their misperceptions of public needs and preferences are partly to blame, but the sources of democratic dysfunction are much deeper and more structural than information alone. Working to ensure many more “citizens’ voices are truly heard” will thus do little to improve government responsiveness in contexts where the distribution of power means that policymakers have no incentive to do what citizens say. And as some critics have argued, it can even distract from recognizing and remedying other problems, creating a veneer of legitimacy—what health policy expert Sherry Arnstein once famously derided as mere “window dressing.”

Still, there are plenty of cases where contributions from citizens can highlight new problems that need addressing, new perspectives by which issues are understood, and new ideas for solving public problems—from administrative agencies seeking public input to city governments seeking to resolve resident complaints and citizens’ assemblies deliberating on climate policy. But even in these and other contexts, there is reason to doubt AI’s usefulness across the board. The possibilities of AI for civic engagement depend crucially on what exactly it is that policymakers want to learn from the public. For some types of learning, applications of AI can make major contributions to enhance the efficiency and efficacy of information processing. For others, there is no getting around the fundamental need for human attention and context-specific knowledge in order to adequately make sense of public voices. We need to better understand these differences to avoid wasting resources on tools that might not deliver useful information…(More)”.

A Second Academic Exodus From X?


Article by Josh Moody: “Two years ago, after Elon Musk bought Twitter for $44 billion, promptly renaming it X, numerous academics decamped from the platform. Now, in the wake of a presidential election fraught with online disinformation, a second exodus from the social media site appears underway.

Academics, including some with hundreds of thousands of followers, announced departures from the platform in the immediate aftermath of the election, decrying the toxicity of the website and objecting to Musk and how he wielded the platform to back President-elect Donald Trump. The business mogul threw millions of dollars behind Trump and personally campaigned for him this fall. Musk also advanced various debunked conspiracy theories during the election cycle.

Amid another wave of exits, some users see this as the end of Academic Twitter, which was already arguably in its death throes…

LeBlanc, Kamola and Rosen all mentioned that they were moving to the platform Bluesky, which has grown to 14.5 million users, welcoming more than 700,000 new accounts in recent days. In September, Bluesky had nine million users…

A study published in PS: Political Science & Politics last month concluded that academics began to engage less after Musk bought the platform. But the peak of disengagement came not when the billionaire took over the site in October 2022 but the next month, when he reinstated Donald Trump’s account, which the platform’s previous owners had deactivated after the Jan. 6, 2021, insurrection that Trump encouraged.

The researchers reviewed 15,700 accounts from academics in economics, political science, sociology and psychology for their study.

James Bisbee, a political science professor at Vanderbilt University and article co-author, wrote via email that changes to the platform, particularly to the application programming interface, or API, undermined their ability to collect data for their research.

“Twitter used to be an amazing source of data for political scientists (and social scientists more broadly) thanks in part to its open data ethos,” Bisbee wrote. “Since Musk’s takeover, this is no longer the case, severely limiting the types of conclusions we could draw, and theories we could test, on this platform.”

To Bisbee, that loss is an understated issue: “Along with many other troubling developments on X since the change in ownership, the amputation of data access should not be ignored.”…(More)”.

The Death of Search


Article by Matteo Wong: “For nearly two years, the world’s biggest tech companies have said that AI will transform the web, your life, and the world. But first, they are remaking the humble search engine.

Chatbots and search, in theory, are a perfect match. A standard Google search interprets a query and pulls up relevant results; tech companies have spent tens or hundreds of millions of dollars engineering chatbots that interpret human inputs, synthesize information, and provide fluent, useful responses. No more keyword refining or scouring Wikipedia—ChatGPT will do it all. Search is an appealing target, too: Shaping how people navigate the internet is tantamount to shaping the internet itself.

Months of prophesying about generative AI have now culminated, almost all at once, in what may be the clearest glimpse yet into the internet’s future. After a series of limited releases and product demos, marred by various setbacks and embarrassing errors, tech companies are debuting AI-powered search engines as fully realized, all-inclusive products. Last Monday, Google announced that it would launch its AI Overviews in more than 100 new countries; that feature will now reach more than 1 billion users a month. Days later, OpenAI announced a new search function in ChatGPT, available to paid users for now and soon opening to the public. The same afternoon, the AI-search start-up Perplexity shared instructions for making its “answer engine” the default search tool in your web browser.

For the past week, I have been using these products in a variety of ways: to research articles, follow the election, and run everyday search queries. In turn I have scried, as best I can, into the future of how billions of people will access, relate to, and synthesize information. What I’ve learned is that these products are at once unexpectedly convenient, frustrating, and weird. These tools’ current iterations surprised and, at times, impressed me, yet even when they work perfectly, I’m not convinced that AI search is a wise endeavor…(More)”.

Congress should designate an entity to oversee data security, GAO says


Article by Matt Bracken: “Federal agencies may need to rethink how they handle individuals’ personal data to protect their civil rights and civil liberties, a congressional watchdog said in a new report Tuesday.

Without federal guidance governing the protection of the public’s civil rights and liberties, agencies have pursued a patchwork system of policies tied to the collection, sharing and use of data, the Government Accountability Office said.

To address that problem head-on, the GAO is recommending that Congress select “an appropriate federal entity” to produce guidance or regulations regarding data protection that would apply to all agencies, giving that entity “the explicit authority to make needed technical and policy choices or explicitly stating Congress’s own choices.”

That recommendation was formed after the GAO sent a questionnaire to all 24 Chief Financial Officers Act agencies asking for information about their use of emerging technologies and data capabilities and how they’re guaranteeing that personally identifiable information is safeguarded.

The GAO found that 16 of those CFO Act agencies have policies or procedures in place to protect civil rights and civil liberties with regard to data use, while the other eight have not taken steps to do the same.

The most commonly cited issues for agencies in their efforts to protect the civil rights and civil liberties of the public were “complexities in handling protections associated with new and emerging technologies” and “a lack of qualified staff possessing needed skills in civil rights, civil liberties, and emerging technologies.”

“Further, eight of the 24 agencies believed that additional government-wide law or guidance would strengthen consistency in addressing civil rights and civil liberties protections,” the GAO wrote. “One agency noted that such guidance could eliminate the hodge-podge approach to the governance of data and technology.”

All 24 CFO Act agencies have internal offices to “handle the protection of the public’s civil rights as identified in federal laws,” with much of that work centered on the handling of civil rights violations and related complaints. Four agencies — the departments of Defense, Homeland Security, Justice and Education — have offices to specifically manage civil liberty protections across their entire agencies. The other 20 agencies have mostly adopted a “decentralized approach to protecting civil liberties, including when collecting, sharing, and using data,” the GAO noted…(More)”.