Government reform starts with data, evidence


Article by Kshemendra Paul: “It’s time to strengthen the use of data, evidence and transparency to stop driving with mud on the windshield and to steer the government toward improving management of its programs and operations.

Existing Government Accountability Office and agency inspectors general reports identify thousands of specific evidence-based recommendations to improve efficiency, economy and effectiveness, and to reduce fraud, waste and abuse. Many of these recommendations aim at program design and requirements, highlighting specific instances of overlap, redundancy and duplication. Others describe internal controls that are inadequate to balance program integrity with the experience of the customer, contractor or grantee. While progress is being reported, in part due to stronger partnerships with IGs, much remains to be done. Indeed, GAO’s 2023 High Risk List, which it has produced since 1990, shows surprisingly slow progress in efforts to reduce risks to government programs and operations.

Here are a few examples:

  • GAO estimates recent annual fraud of between $233 billion and $521 billion, or about 3% to 7% of federal spending. By contrast, identified fraud in high-risk Recovery Act spending was held under 1% through the use of data, transparency and partnerships with Offices of Inspectors General.
  • GAO and IGs have collectively identified hundreds of billions in potential cost savings or improvements not yet addressed by federal agencies.
  • GAO has recently described shortcomings with the government’s efforts to build evidence. While federal policymakers need good information to inform their decisions, the Commission on Evidence-Based Policymaking previously said, “too little evidence is produced to meet this need.”

One of the main reasons for agency sluggishness is the lack of agency and governmentwide use of synchronized, authoritative and shared data to support how the government manages itself.

For example, the Energy Department IG found that, “[t]he department often lacks the data necessary to make critical decisions, evaluate and effectively manage risks, or gain visibility into program results.” It is past time for the government to commit to moving away from its widespread use of data calls: the error-prone, costly, manual aggregation of data to support policy analysis and decision-making. Efforts to embrace data-informed approaches to managing government programs and operations are stymied by a lack of basic agency and governmentwide data hygiene. While bright pockets exist, management gaps, as the DOE OIG stated, “create blind spots in the universe of data that, if captured, could be used to more efficiently identify, track and respond to risks…”

The proposed approach starts with current agency operating models, then drives into management process integration to tackle root causes of dysfunction from the bottom up. It recognizes that inefficiency, fraud and other challenges are diffuse, deeply embedded and have non-obvious interrelationships within the federal complex…(More)”

Academic writing is getting harder to read—the humanities most of all


The Economist: “Academics have long been accused of jargon-filled writing that is impossible to understand. A recent cautionary tale was that of Ally Louks, a researcher who set off a social media storm with an innocuous post on X celebrating the completion of her PhD. If it was Ms Louks’s research topic (“olfactory ethics”—the politics of smell) that caught the attention of online critics, it was her verbose thesis abstract that further provoked their ire. In two weeks, the post received more than 21,000 retweets and 100m views.

Although the abuse directed at Ms Louks reeked of misogyny and anti-intellectualism—which she admirably shook off—the reaction was also a backlash against an academic use of language that is removed from normal life. Inaccessible writing is part of the problem. Research has become harder to read, especially in the humanities and social sciences. Though authors may argue that their work is written for expert audiences, much of the general public suspects that some academics use gobbledygook to disguise the fact that they have nothing useful to say. The trend towards more opaque prose hardly allays this suspicion…(More)”.
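One common way to substantiate the claim that research prose has grown harder to read is a readability formula. Below is a minimal, self-contained sketch that scores two invented abstracts with the Flesch reading-ease metric (higher scores are easier to read); the sample texts and the crude vowel-group syllable counter are illustrative assumptions, not The Economist's actual methodology.

```python
# A minimal sketch: scoring text difficulty with the Flesch reading-ease
# formula. The syllable heuristic and sample abstracts are assumptions.
import re

def count_syllables(word: str) -> int:
    # Heuristic: count groups of consecutive vowels; floor at 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

plain = "We study how people smell things. Smell shapes how we treat each other."
dense = ("This thesis interrogates the olfactory dimensions of intersubjective "
         "marginalization, foregrounding smell as a vector of embodied alterity.")

print(f"plain abstract: {flesch_reading_ease(plain):.1f}")   # higher = easier
print(f"dense abstract: {flesch_reading_ease(dense):.1f}")   # lower = harder
```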

Behaviour-based dependency networks between places shape urban economic resilience


Paper by Takahiro Yabe et al: “Disruptions, such as closures of businesses during pandemics, not only affect businesses and amenities directly but also influence how people move, spreading the impact to other businesses and increasing the overall economic shock. However, it is unclear how much businesses depend on each other during disruptions. Leveraging human mobility data and same-day visits in five US cities, we quantify dependencies between points of interest encompassing businesses, stores and amenities. We find that dependency networks computed from human mobility exhibit significantly higher rates of long-distance connections and biases towards specific pairs of point-of-interest categories. We show that using behaviour-based dependency relationships improves the predictability of business resilience during shocks by around 40% compared with distance-based models, and that neglecting behaviour-based dependencies can lead to underestimation of the spatial cascades of disruptions. Our findings underscore the importance of measuring complex relationships in patterns of human mobility to foster urban economic resilience to shocks…(More)”.
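As an illustration of the general idea, not the authors' actual pipeline, the sketch below derives a behaviour-based dependency network from same-day co-visits: two places become linked when the same person visits both on the same day, with edge weight equal to the co-visit count. The toy visit records are invented, and the paper's real dependency measure may differ.

```python
# A minimal sketch: a co-visit dependency network from mobility records.
from collections import Counter, defaultdict
from itertools import combinations

# (visitor_id, day, point_of_interest) -- hypothetical records
visits = [
    ("u1", "2020-03-01", "coffee_shop"), ("u1", "2020-03-01", "bookstore"),
    ("u2", "2020-03-01", "coffee_shop"), ("u2", "2020-03-01", "gym"),
    ("u1", "2020-03-02", "coffee_shop"), ("u1", "2020-03-02", "bookstore"),
]

# Group each visitor-day into the set of POIs visited together.
baskets = defaultdict(set)
for user, day, poi in visits:
    baskets[(user, day)].add(poi)

# Edge weight = number of visitor-days on which both POIs were visited.
edges = Counter()
for pois in baskets.values():
    for a, b in combinations(sorted(pois), 2):
        edges[(a, b)] += 1

for (a, b), w in edges.most_common():
    print(f"{a} <-> {b}: {w} same-day co-visits")
```

A purely distance-based baseline would link only nearby places; the paper's point is that networks built this way from behaviour capture the long-distance dependencies such baselines miss.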

How Your Car Might Be Making Roads Safer


Article by Kashmir Hill: “Darcy Bullock, a civil engineering professor at Purdue University, turns to his computer screen to get information about how fast cars are traveling on Interstate 65, which runs 887 miles from Lake Michigan to the Gulf of Mexico. It’s midafternoon on a Monday, and his screen is mostly filled with green dots indicating that traffic is moving along nicely. But near an exit on the outskirts of Indianapolis, an angry red streak shows that cars have stopped moving.

A traffic camera nearby reveals the cause: A car has spun out, causing gridlock.

In recent years, vehicles that have wireless connectivity have become a critical source of information for transportation departments and for academics who study traffic patterns. The data these vehicles emit — including speed, how hard they brake and accelerate, and even if their windshield wipers are on — can offer insights into dangerous road conditions, congestion or poorly timed traffic signals.

“Our cars know more about our roads than agencies do,” said Dr. Bullock, who regularly works with the Indiana Department of Transportation to conduct studies on how to reduce traffic congestion and increase road safety. He credits connected-car data with detecting hazards that would have taken years — and many accidents — to find in the past.

The data comes primarily from commercial trucks and from cars made by General Motors that are enrolled in OnStar, G.M.’s internet-connected service. (Drivers know OnStar as the service that allows them to lock their vehicles from a smartphone app or find them if they have been stolen.) Federal safety guidelines require commercial truck drivers to be routinely monitored, but people driving G.M. vehicles may be surprised to know that their data is being collected, though it is indicated in the fine print of the company’s privacy policy…(More)”.
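The green-dot/red-dot view Dr. Bullock describes can be approximated with a very simple aggregation. The sketch below assumes simplified (segment, speed) telemetry pings and an arbitrary 45 mph threshold; real connected-car feeds, segment definitions and alerting logic would be considerably more involved.

```python
# A minimal sketch: flagging slow freeway segments from vehicle speed pings.
# Records, segment labels and the 45 mph threshold are assumptions.
from collections import defaultdict
from statistics import mean

# (road_segment, speed_mph) -- hypothetical telemetry pings
pings = [
    ("I-65 mm 101", 68), ("I-65 mm 101", 71), ("I-65 mm 101", 65),
    ("I-65 mm 110", 12), ("I-65 mm 110", 8),  ("I-65 mm 110", 15),
]

by_segment = defaultdict(list)
for segment, speed in pings:
    by_segment[segment].append(speed)

for segment, speeds in sorted(by_segment.items()):
    avg = mean(speeds)
    status = "RED (possible incident)" if avg < 45 else "green"
    print(f"{segment}: {avg:.0f} mph -> {status}")
```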

Setting the Standard: Statistical Agencies’ Unique Role in Building Trustworthy AI


Article by Corinna Turbes: “As our national statistical agencies grapple with new challenges posed by artificial intelligence (AI), many agencies face intense pressure to embrace generative AI as a way to reach new audiences and demonstrate technological relevance. However, the rush to implement generative AI applications risks undermining these agencies’ fundamental role as authoritative data sources. Statistical agencies’ foundational mission—producing and disseminating high-quality, authoritative statistical information—requires a more measured approach to AI adoption.

Statistical agencies occupy a unique and vital position in our data ecosystem, entrusted with creating the reliable statistics that form the backbone of policy decisions, economic planning, and social research. The work of these agencies demands exceptional precision, transparency, and methodological rigor. Implementation of generative AI interfaces, while technologically impressive, could inadvertently compromise the very trust and accuracy that make these agencies indispensable.

While public-facing interfaces play a valuable role in democratizing access to statistical information, statistical agencies need not—and often should not—rely on generative AI to be effective in that effort. For statistical agencies, an extractive AI approach – which retrieves and presents existing information from verified databases rather than generating new content – offers a more appropriate path forward. By pulling from verified, structured datasets and providing precise, accurate responses, extractive AI systems can maintain the high standards of accuracy required while making statistical information more accessible to users who may find traditional databases overwhelming. An extractive, rather than generative, approach allows agencies to modernize data delivery while preserving their core mission of providing reliable, verifiable statistical information…(More)”
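A minimal sketch of the extractive pattern the article favors, assuming a small hand-verified lookup table and naive keyword matching; a production system would sit on a real statistical database with far more robust query handling. The point is that every answer is retrieved verbatim from vetted data, never generated.

```python
# A minimal sketch: extractive (retrieve-only) question answering over
# verified statistics. The table entries and matcher are assumptions.
VERIFIED_STATS = {
    ("unemployment rate", "2023"): "3.6% (source: table A-1, verified 2024-01-05)",
    ("median household income", "2022"): "$74,580 (source: table H-6, verified 2023-09-12)",
}

def extractive_answer(query: str) -> str:
    q = query.lower()
    for (topic, year), value in VERIFIED_STATS.items():
        if topic in q and year in q:
            return value  # returned exactly as stored -- no generation step
    return "No verified statistic found; refer to the source database."

print(extractive_answer("What was the unemployment rate in 2023?"))
print(extractive_answer("GDP growth in 2021?"))  # unmatched -> no invented figure
```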

Bad data costs Americans trillions. Let’s fix it with a renewed data strategy


Article by Nick Hart & Suzette Kent: “Over the past five years, the federal government lost $200 billion to $500 billion per year in fraud and improper payments — that’s up to $3,000 taken from every working American’s pocket annually. Since 2003, these preventable losses have totaled an astounding $2.7 trillion. But here’s the good news: We already have the data and technology to eliminate much of this waste in the years ahead. The operational structure and legal authority to put these tools to work protecting taxpayer dollars need to be refreshed and prioritized.

The challenge is straightforward: Government agencies often can’t effectively share and verify basic information before sending payments. For example, federal agencies may not be able to easily check if someone is deceased, verify income or detect duplicate payments across programs…(More)”.
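The verification the authors describe amounts to a handful of pre-payment checks. The sketch below illustrates the idea with invented records: each payment is screened against a death registry and against payments already issued by other programs before it goes out.

```python
# A minimal sketch of pre-payment screening. All records are invented.
death_registry = {"SSN-0003"}

payments = [
    {"program": "A", "payee": "SSN-0001", "amount": 1200},
    {"program": "B", "payee": "SSN-0001", "amount": 1200},  # cross-program duplicate
    {"program": "A", "payee": "SSN-0003", "amount": 800},   # deceased payee
]

seen = set()
for p in payments:
    if p["payee"] in death_registry:
        print(f"HOLD {p}: payee appears in death registry")
    elif (p["payee"], p["amount"]) in seen:
        print(f"HOLD {p}: possible duplicate across programs")
    else:
        seen.add((p["payee"], p["amount"]))
        print(f"PAY  {p}")
```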

AI, huge hacks leave consumers facing a perfect storm of privacy perils


Article by Joseph Menn: “Hackers are using artificial intelligence to mine unprecedented troves of personal information dumped online in the past year, along with unregulated commercial databases, to trick American consumers and even sophisticated professionals into giving up control of bank and corporate accounts.

Armed with sensitive health information, calling records and hundreds of millions of Social Security numbers, criminals and operatives of countries hostile to the United States are crafting emails, voice calls and texts that purport to come from government officials, co-workers or relatives needing help, or familiar financial organizations trying to protect accounts instead of draining them.

“There is so much data out there that can be used for phishing and password resets that it has reduced overall security for everyone, and artificial intelligence has made it much easier to weaponize,” said Ashkan Soltani, executive director of the California Privacy Protection Agency, the only such state-level agency.

The losses reported to the FBI’s Internet Crime Complaint Center nearly tripled from 2020 to 2023, to $12.5 billion, and a number of sensitive breaches this year have only increased internet insecurity. The recently discovered Chinese government hacks of U.S. telecommunications companies AT&T, Verizon and others, for instance, were deemed so serious that government officials are being told not to discuss sensitive matters on the phone, some of those officials said in interviews. A Russian ransomware gang’s breach of Change Healthcare in February captured data on millions of Americans’ medical conditions and treatments, and in August, a small data broker, National Public Data, acknowledged that it had lost control of hundreds of millions of Social Security numbers and addresses now being sold by hackers.

Meanwhile, the capabilities of artificial intelligence are expanding at breakneck speed. “The risks of a growing surveillance industry are only heightened by AI and other forms of predictive decision-making, which are fueled by the vast datasets that data brokers compile,” U.S. Consumer Financial Protection Bureau Director Rohit Chopra said in September…(More)”.

Scientists Scramble to Save Climate Data from Trump—Again


Article by Chelsea Harvey: “Eight years ago, as the Trump administration was getting ready to take office for the first time, mathematician John Baez was making his own preparations.

Together with a small group of friends and colleagues, he was arranging to download large quantities of public climate data from federal websites in order to safely store them away. Then-President-elect Donald Trump had repeatedly denied the basic science of climate change and had begun nominating climate skeptics for cabinet posts. Baez, a professor at the University of California, Riverside, was worried the information — everything from satellite data on global temperatures to ocean measurements of sea-level rise — might soon be destroyed.

His effort, known as the Azimuth Climate Data Backup Project, archived at least 30 terabytes of federal climate data by the end of 2017.

In the end, it was an overprecaution.

The first Trump administration altered or deleted numerous federal web pages containing public-facing climate information, according to monitoring efforts by the nonprofit Environmental Data and Governance Initiative (EDGI), which tracks changes on federal websites. But federal databases, containing vast stores of globally valuable climate information, remained largely intact through the end of Trump’s first term.

Yet as Trump prepares to take office again, scientists are growing more worried.

Federal datasets may be in bigger trouble this time than they were under the first Trump administration, they say. And they’re preparing to begin their archiving efforts anew.

“This time around we expect them to be much more strategic,” said Gretchen Gehrke, EDGI’s website monitoring program lead. “My guess is that they’ve learned their lessons.”

The Trump transition team didn’t respond to a request for comment.

Like Baez’s Azimuth project, EDGI was born in 2016 in response to Trump’s first election. They weren’t the only ones…(More)”.
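For readers curious what such archiving looks like in practice, here is a minimal sketch with placeholder URLs rather than the actual Azimuth or EDGI sources: fetch each public dataset and record a SHA-256 checksum, so archived copies can later be verified as unaltered.

```python
# A minimal sketch: bulk-download public datasets and keep a checksum
# manifest. The URLs below are placeholders, not real federal endpoints.
import hashlib
import urllib.request
from pathlib import Path

DATASETS = [
    "https://example.gov/climate/global_temp.csv",  # placeholder URL
    "https://example.gov/climate/sea_level.csv",    # placeholder URL
]

archive = Path("archive")
archive.mkdir(exist_ok=True)

for url in DATASETS:
    name = url.rsplit("/", 1)[-1]
    dest = archive / name
    urllib.request.urlretrieve(url, dest)  # download the raw file
    digest = hashlib.sha256(dest.read_bytes()).hexdigest()
    # Append to a manifest so any later copy can be checked for tampering.
    with open(archive / "MANIFEST.sha256", "a") as m:
        m.write(f"{digest}  {name}\n")
    print(f"archived {name}: sha256={digest[:12]}...")
```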

Shifting Patterns of Social Interaction: Exploring the Social Life of Urban Spaces Through A.I.


Paper by Arianna Salazar-Miranda, et al: “We analyze changes in pedestrian behavior over a 30-year period in four urban public spaces located in New York, Boston, and Philadelphia. Building on William Whyte’s observational work from 1980, where he manually recorded pedestrian behaviors, we employ computer vision and deep learning techniques to examine video footage from 1979-80 and 2008-10. Our analysis measures changes in walking speed, lingering behavior, group sizes, and group formation. We find that the average walking speed has increased by 15%, while the time spent lingering in these spaces has halved across all locations. Although the percentage of pedestrians walking alone remained relatively stable (from 67% to 68%), the frequency of group encounters declined, indicating fewer interactions in public spaces. This shift suggests that urban residents increasingly view streets as thoroughfares rather than as social spaces, which has important implications for the role of public spaces in fostering social engagement…(More)”.
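One of the paper's core measurements, walking speed, reduces to simple geometry once pedestrians are tracked across frames. The sketch below estimates speed from a single toy trajectory; the track, frame rate and pixel-to-metre scale are assumptions, and the authors' actual pipeline relies on deep-learning detection and tracking rather than hand-supplied positions.

```python
# A minimal sketch: walking speed from a tracked pedestrian trajectory.
# FPS, calibration scale and the trajectory itself are assumptions.
import math

FPS = 30                 # video frame rate (assumed)
METERS_PER_PIXEL = 0.02  # camera calibration (assumed)

# (frame_index, x_pixels, y_pixels) for one tracked pedestrian
track = [(0, 100, 200), (30, 160, 205), (60, 221, 212), (90, 283, 220)]

speeds = []
for (f0, x0, y0), (f1, x1, y1) in zip(track, track[1:]):
    pixels = math.hypot(x1 - x0, y1 - y0)
    seconds = (f1 - f0) / FPS
    speeds.append(pixels * METERS_PER_PIXEL / seconds)

avg_speed = sum(speeds) / len(speeds)
print(f"average walking speed: {avg_speed:.2f} m/s")
# A pedestrian whose speed stays near zero could be classed as "lingering".
```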

Artificial Intelligence and the Future of Work


Report by the National Academies: “AI technology is at an inflection point: a surge of technological progress has driven the rapid development and adoption of generative AI systems, such as ChatGPT, which are capable of generating text, images, or other content based on user requests.

This technical progress is likely to continue in coming years, with the potential to complement or replace human labor in certain tasks and reshape job markets. However, it is difficult to predict exactly which new AI capabilities might emerge, and when these advances might occur.

This National Academies report evaluates recent advances in AI technology and their implications for economic productivity, job stability, and income inequality, identifying research opportunities and data needs to equip workers and policymakers to flexibly respond to AI developments…(More)”