Good government data requires good statistics officials – but how motivated and competent are they?


World Bank Blog: “Government data is only as reliable as the statistics officials who produce it. Yet, surprisingly little is known about these officials themselves. For decades, they have diligently collected data on others – such as households and firms – to generate official statistics, from poverty rates to inflation figures. But data about statistics officials themselves is missing. How competent are they at analyzing statistical data? How motivated are they to excel in their roles? Do they uphold integrity when producing official statistics, even in the face of opposing career incentives or political pressures? And what can National Statistical Offices (NSOs) do to cultivate a workforce that is competent, motivated, and ethical?

We surveyed 13,300 statistics officials in 14 countries in Latin America and the Caribbean to find out. Five results stand out. For further insights, consult our Inter-American Development Bank (IDB) report, Making National Statistical Offices Work Better.

1. The competence and management of statistics officials shape the quality of statistical data

Our survey included a short exam assessing basic statistical competencies, such as descriptive statistics and probability. Statistical competence correlates with data quality: NSOs with higher exam scores among employees tend to achieve better results in the World Bank’s Statistical Performance Indicators (r = 0.36).

NSOs with better management practices also have better statistical performance. For instance, NSOs with more robust recruitment and selection processes have better statistical performance (r = 0.62)…(More)”.

Nearly all Americans use AI, though most dislike it, poll shows


Axios: “The vast majority of Americans use products that involve AI, but their views of the technology remain overwhelmingly negative, according to a Gallup-Telescope survey published Wednesday.

Why it matters: The rapid advancement of generative AI threatens to have far-reaching consequences for Americans’ everyday lives, including reshaping the job market, impacting elections, and affecting the health care industry.

The big picture: An estimated 99% of Americans used at least one AI-enabled product in the past week, but nearly two-thirds didn’t realize they were doing so, according to the poll’s findings.

  • These products included navigation apps, personal virtual assistants, weather forecasting apps, streaming services, shopping websites and social media platforms.
  • Ellyn Maese, a senior research consultant at Gallup, told Axios that the disconnect is because there is “a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent.”

Zoom in: Despite its prevalent use, Americans’ views of AI remain overwhelmingly bleak, the survey found.

  • 72% of those surveyed had a “somewhat” or “very” negative opinion of how AI would impact the spread of false information, while 64% said the same about how it affects social connections.
  • The only area where a majority of Americans (61%) had a positive view of AI’s impact was regarding how it might help medical diagnosis and treatment…

State of play: The survey found that 68% of Americans believe the government and businesses equally bear responsibility for addressing the spread of false information related to AI.

  • 63% said the same about personal data privacy violations.
  • Majorities of those surveyed felt the same about combating the unauthorized use of individuals’ likenesses (62%) and AI’s impact on job losses (52%).
  • In fact, the only area where Americans felt differently was when it came to national security threats; 62% of those surveyed said the government bore primary responsibility for reducing such threats…(More)”.

Why Canada needs to embrace innovations in democracy


Article by Megan Mattes and Joanna Massie: “Although one-off democratic innovations like citizens’ assemblies are excellent approaches for tackling a big issue, more embedded types of innovations could be a powerful tool for maintaining an ongoing connection between public interest and political decision-making.

Innovative approaches to maintaining an ongoing, meaningful connection between people and policymakers are underway. In New Westminster, B.C., a standing citizen body called the Community Advisory Assembly was convened from January 2024 to January 2025.

These citizen advisers are selected through random sampling to ensure the assembly’s demographic makeup is aligned with the overall population.
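The assembly's published materials do not specify the exact selection algorithm; real sortition processes often use more sophisticated quota-balancing methods. As a purely hypothetical sketch, a simple quota-based random draw (here with an invented pool and a single age-bracket attribute) might look like:

```python
import random
from collections import defaultdict


def select_panel(pool, quotas, seed=0):
    """Randomly draw a panel matching per-group quotas (simple sortition sketch).

    pool:   list of (person_id, group) tuples
    quotas: {group: number of seats for that group}
    """
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for person, group in pool:
        by_group[group].append(person)
    panel = []
    for group, seats in quotas.items():
        # Uniform random draw within each demographic stratum.
        panel.extend(rng.sample(by_group[group], seats))
    return panel


# Hypothetical pool of 100 volunteers tagged with an age bracket.
pool = [(f"p{i}", "18-39" if i % 2 else "40+") for i in range(100)]
panel = select_panel(pool, {"18-39": 3, "40+": 3})
print(panel)
```

Matching quotas across several attributes at once (age, gender, geography, income) turns this into a constraint-satisfaction problem, which is why production assemblies typically rely on dedicated selection software.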

Over the last year, members have both given input on policy ideas initiated by New Westminster city council and initiated conversations on their own policy priorities. Notes from these discussions are passed on to council and city staff, who consider incorporating them into policymaking.

The question is whether the project will live beyond its pilot.

A similarly promising democratic innovation, the City of Toronto’s Planning Review Panel, ran for two terms before it was cancelled. In contrast, both the Paris city council and the state government of Ostbelgien (East Belgium) have convened permanent citizen advisory bodies to work alongside elected officials.

While public opinion is only one ingredient in government decision-making, ensuring democratic innovations are a standard component of policymaking could go a long way to enshrining public dialogue as a valuable governance tool.

Whether through annual participatory budgeting exercises or a standing citizen advisory body, democratic innovations can make public priorities a key focus of policy and restore government accountability to citizens…(More)”.

What’s a Fact, Anyway?


Essay by Fergus McIntosh: “…For journalists, as for anyone, there are certain shortcuts to trustworthiness, including reputation, expertise, and transparency—the sharing of sources, for example, or the prompt correction of errors. Some of these shortcuts are more perilous than others. Various outfits, positioning themselves as neutral guides to the marketplace of ideas, now tout evaluations of news organizations’ trustworthiness, but relying on these requires trusting in the quality and objectivity of the evaluation. Official data is often taken at face value, but numbers can conceal motives: think of the dispute over how to count casualties in recent conflicts. Governments, meanwhile, may use their powers over information to suppress unfavorable narratives: laws originally aimed at misinformation, many enacted during the COVID-19 pandemic, can hinder free expression. The spectre of this phenomenon is fuelling a growing backlash in America and elsewhere.

Although some categories of information may come to be considered inherently trustworthy, these, too, are in flux. For decades, the technical difficulty of editing photographs and videos allowed them to be treated, by most people, as essentially incontrovertible. With the advent of A.I.-based editing software, footage and imagery have swiftly become much harder to credit. Similar tools are already used to spoof voices based on only seconds of recorded audio. For anyone, this might manifest in scams (your grandmother calls, but it’s not Grandma on the other end), but for a journalist it also puts source calls into question. Technologies of deception tend to be accompanied by ones of detection or verification—a battery of companies, for example, already promise that they can spot A.I.-manipulated imagery—but they’re often locked in an arms race, and they never achieve total accuracy. Though chatbots and A.I.-enabled search engines promise to help us with research (when a colleague “interviewed” ChatGPT, it told him, “I aim to provide information that is as neutral and unbiased as possible”), their inability to provide sourcing, and their tendency to hallucinate, look more like a shortcut to nowhere, at least for now. The resulting problems extend far beyond media: election campaigns, in which subtle impressions can lead to big differences in voting behavior, feel increasingly vulnerable to deepfakes and other manipulations by inscrutable algorithms. Like everyone else, journalists have only just begun to grapple with the implications.

In such circumstances, it becomes difficult to know what is true, and, consequently, to make decisions. Good journalism offers a way through, but only if readers are willing to follow: trust and naïveté can feel uncomfortably close. Gaining and holding that trust is hard. But failure—the end point of the story of generational decay, of gold exchanged for dross—is not inevitable. Fact checking of the sort practiced at The New Yorker is highly specific and resource-intensive, and it’s only one potential solution. But any solution must acknowledge the messiness of truth, the requirements of attention, the way we squint to see more clearly. It must tell you to say what you mean, and know that you mean it…(More)”.

What Could Citizens’ Assemblies Do for American Politics?


Essay by Nick Romeo: “Last July, an unusual letter arrived at Kathryn Kundmueller’s mobile home, in central Oregon. It invited her to enter a lottery that would select thirty residents of Deschutes County to deliberate for five days on youth homelessness—a visible and contentious issue in an area where the population and cost of living have spiked in recent years. Those chosen would be paid for their time—almost five hundred dollars—and asked to develop specific policy recommendations.

Kundmueller was being invited to join what is known as a citizens’ assembly. These gatherings do what most democracies only pretend to: trust normal people to make decisions on difficult policy questions. Many citizens’ assemblies follow a basic template. They impanel a random but representative cross-section of a population, give them high-quality information on a topic, and ask them to work together to reach a decision. In Europe, such groups have helped spur reform of the Irish constitution in order to legalize abortion, guided an Austrian pharmaceutical heiress on how to give away her wealth, and become a regular part of government in Paris and Belgium. Though still rare in America, the model reflects the striking idea that fundamental problems of politics—polarization, apathy, manipulation by special interests—can be transformed through radically direct democracy.

Kundmueller, who is generally frustrated by politics, was intrigued by the letter. She liked the prospect of helping to shape local policy, and the topic of housing insecurity had a particular resonance for her. As a teen-ager, following a falling-out with her father, she spent months bouncing between friends’ couches in Vermont. When she moved across the country to San Jose, after college, she lived in her car for a time while she searched for a stable job. She worked in finance but became disillusioned; now in her early forties, she ran a small housecleaning business. She still thought about living in a van and renting out her mobile home to save money…(More)”.

Will Artificial Intelligence Replace Us or Empower Us?


Article by Peter Coy: “…But A.I. could also be designed to empower people rather than replace them, as I wrote a year ago in a newsletter about the M.I.T. Shaping the Future of Work Initiative.

Which of those A.I. futures will be realized was a big topic at the San Francisco conference, which was the annual meeting of the American Economic Association, the American Finance Association and 65 smaller groups in the Allied Social Science Associations.

Erik Brynjolfsson of Stanford was one of the busiest economists at the conference, dashing from one panel to another to talk about his hopes for a human-centric A.I. and his warnings about what he has called the “Turing Trap.”

Alan Turing, the English mathematician and World War II code breaker, proposed in 1950 to evaluate the intelligence of computers by whether they could fool someone into thinking they were human. His “imitation game” led the field in an unfortunate direction, Brynjolfsson argues — toward creating machines that behaved as much like humans as possible, instead of like human helpers.

Henry Ford didn’t set out to build a car that could mimic a person’s walk, so why should A.I. experts try to build systems that mimic a person’s mental abilities? Brynjolfsson asked at one session I attended.

Other economists have made similar points: Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University use the term “so-so technologies” for systems that replace human beings without meaningfully increasing productivity, such as self-checkout kiosks in supermarkets.

People will need a lot more education and training to take full advantage of A.I.’s immense power, so that they aren’t just elbowed aside by it. “In fact, for each dollar spent on machine learning technology, companies may need to spend nine dollars on intangible human capital,” Brynjolfsson wrote in 2022, citing research by him and others…(More)”.

Kickstarting Collaborative, AI-Ready Datasets in the Life Sciences with Government-funded Projects


Article by Erika DeBenedictis, Ben Andrew & Pete Kelly: “In the age of Artificial Intelligence (AI), large high-quality datasets are needed to move the life sciences forward. However, the research community lacks strategies to incentivize collaboration on high-quality data acquisition and sharing. The government should fund collaborative roadmapping, certification, collection, and sharing of large, high-quality datasets in life science. In such a system, nonprofit research organizations engage scientific communities to identify key types of data that would be valuable for building predictive models, and define quality control (QC) and open science standards for collection of that data. Projects are designed to develop automated methods for data collection, certify data providers, and facilitate data collection in consultation with researchers throughout various scientific communities. Hosting of the resulting open data is subsidized and protected by security measures. This system would provide crucial incentives for the life science community to identify and amass large, high-quality open datasets that will immensely benefit researchers…(More)”.

The People Say


About: “The People Say is an online research hub that features first-hand insights from older adults and caregivers on the issues most important to them, as well as feedback from experts on policies affecting older adults. 

This project particularly focuses on the experiences of communities often under-consulted in policymaking, including older adults of color, those who are low income, and/or those who live in rural areas where healthcare isn’t easily accessible. The People Say is funded by The SCAN Foundation and developed by researchers and designers at the Public Policy Lab.

We believe that effective policymaking listens to most-affected communities—but policies and systems that serve older adults are typically formed with little to no input from older adults themselves. We hope The People Say will help policymakers hear the voices of older adults when shaping policy…(More)”

Government reform starts with data, evidence


Article by Kshemendra Paul: “It’s time to strengthen the use of data, evidence and transparency to stop driving with mud on the windshield and to steer the government toward improving management of its programs and operations.

Existing Government Accountability Office and agency inspectors general reports identify thousands of specific evidence-based recommendations to improve efficiency, economy and effectiveness, and reduce fraud, waste and abuse. Many of these recommendations aim at program design and requirements, highlighting specific instances of overlap, redundancy and duplication. Others describe inadequate internal controls to balance program integrity with the experience of the customer, contractor or grantee. While progress is being reported in part due to stronger partnerships with IGs, much remains to be done. Indeed, GAO’s 2023 High Risk List, which it has produced going back to 1990, shows surprisingly slow progress of efforts to reduce risk to government programs and operations.

Here are a few examples:

  • GAO estimates recent annual fraud of between $233 billion and $521 billion, or about 3% to 7% of federal spending. On the other hand, identified fraud with high-risk Recovery Act spending was held under 1% using data, transparency and partnerships with Offices of Inspectors General.
  • GAO and IGs have collectively identified hundreds of billions in potential cost savings or improvements not yet addressed by federal agencies.
  • GAO has recently described shortcomings with the government’s efforts to build evidence. While federal policymakers need good information to inform their decisions, the Commission on Evidence-Based Policymaking previously said, “too little evidence is produced to meet this need.”

One of the main reasons for agency sluggishness is the lack of agency and governmentwide use of synchronized, authoritative and shared data to support how the government manages itself.

For example, the Energy Department IG found that, “[t]he department often lacks the data necessary to make critical decisions, evaluate and effectively manage risks, or gain visibility into program results.” It is past time for the government to commit itself to move away from its widespread use of data calls, the error-prone, costly and manual aggregation of data used to support policy analysis and decision-making. Efforts to embrace data-informed approaches to manage government programs and operations are stymied by lack of basic agency and governmentwide data hygiene. While bright pockets exist, management gaps, as DOE OIG stated, “create blind spots in the universe of data that, if captured, could be used to more efficiently identify, track and respond to risks…”

The proposed approach starts with current agency operating models, then drives into management process integration to tackle root causes of dysfunction from the bottom up. It recognizes that inefficiency, fraud and other challenges are diffused, deeply embedded and have non-obvious interrelationships within the federal complex…(More)”

Survey of attitudes in a Danish public towards reuse of health data


Paper by Lea Skovgaard et al: “Everyday clinical care generates vast amounts of digital data. A broad range of actors are interested in reusing these data for various purposes. Such reuse of health data could support medical research, healthcare planning, technological innovation, and lead to increased financial revenue. Yet, reuse also raises questions about what data subjects think about the use of health data for different purposes. Based on a survey with 1071 respondents conducted in 2021 in Denmark, this article explores attitudes to health data reuse. Denmark is renowned for its advanced integration of data infrastructures, facilitating data reuse. This is therefore a relevant setting from which to explore public attitudes to reuse, both as authorities around the globe are currently working to facilitate data reuse opportunities, and in the light of the recent agreement on the establishment in 2024 of the European Health Data Space (EHDS) within the European Union (EU). Our study suggests that there are certain forms of health data reuse—namely transnational data sharing, commercial involvement, and use of data as national economic assets—which risk undermining public support for health data reuse. However, these three controversial purposes are among those the EHDS is intended to facilitate. Failure to address these public concerns could well challenge the long-term legitimacy and sustainability of the data infrastructures currently under construction…(More)”