Nearly all Americans use AI, though most dislike it, poll shows


Axios: “The vast majority of Americans use products that involve AI, but their views of the technology remain overwhelmingly negative, according to a Gallup-Telescope survey published Wednesday.

Why it matters: The rapid advancement of generative AI threatens to have far-reaching consequences for Americans’ everyday lives, including reshaping the job market, impacting elections, and affecting the health care industry.

The big picture: An estimated 99% of Americans used at least one AI-enabled product in the past week, but nearly two-thirds didn’t realize they were doing so, according to the poll’s findings.

  • These products included navigation apps, personal virtual assistants, weather forecasting apps, streaming services, shopping websites and social media platforms.
  • Ellyn Maese, a senior research consultant at Gallup, told Axios that the disconnect is because there is “a lot of confusion when it comes to what is just a computer program versus what is truly AI and intelligent.”

Zoom in: Despite its prevalent use, Americans’ views of AI remain overwhelmingly bleak, the survey found.

  • 72% of those surveyed had a “somewhat” or “very” negative opinion of how AI would impact the spread of false information, while 64% said the same about how it affects social connections.
  • The only area where a majority of Americans (61%) had a positive view of AI’s impact was regarding how it might help medical diagnosis and treatment…

State of play: The survey found that 68% of Americans believe the government and businesses equally bear responsibility for addressing the spread of false information related to AI.

  • 63% said the same about personal data privacy violations.
  • Majorities of those surveyed felt the same about combatting the unauthorized use of individuals’ likenesses (62%) and AI’s impact on job losses (52%).
  • In fact, the only area where Americans felt differently was when it came to national security threats; 62% of those surveyed said the government bore primary responsibility for reducing such threats…(More).”

Why Canada needs to embrace innovations in democracy


Article by Megan Mattes and Joanna Massie: “Although one-off democratic innovations like citizens’ assemblies are excellent approaches for tackling a big issue, more embedded types of innovations could be a powerful tool for maintaining an ongoing connection between public interest and political decision-making.

Innovative approaches to maintaining an ongoing, meaningful connection between people and policymakers are underway. In New Westminster, B.C., a standing citizen body called the Community Advisory Assembly has been convened from January 2024 to January 2025.

These citizen advisers are selected through random sampling to ensure the assembly’s demographic makeup is aligned with the overall population.
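
The selection step described here is essentially stratified random sampling (sortition). A minimal sketch of how such a lottery could work, assuming a registrant pool with age-group and gender attributes (the attribute names and quotas below are illustrative, not New Westminster’s actual criteria):

```python
import random
from collections import defaultdict

def select_assembly(pool, quotas, seed=42):
    """Draw a panel whose demographic mix matches target quotas.

    pool   -- list of dicts, e.g. {"name": ..., "age_group": ..., "gender": ...}
    quotas -- dict mapping (age_group, gender) -> number of seats
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for person in pool:
        by_stratum[(person["age_group"], person["gender"])].append(person)

    panel = []
    for stratum, seats in quotas.items():
        candidates = by_stratum.get(stratum, [])
        if len(candidates) < seats:
            raise ValueError(f"not enough volunteers in stratum {stratum}")
        panel.extend(rng.sample(candidates, seats))  # random draw within each stratum
    return panel

# Illustrative usage: 4 seats split across two strata.
pool = [{"name": f"p{i}",
         "age_group": "18-39" if i % 2 else "40+",
         "gender": "F" if i % 3 else "M"} for i in range(200)]
quotas = {("18-39", "F"): 2, ("40+", "M"): 2}
print([p["name"] for p in select_assembly(pool, quotas)])
```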

Over the last year, members have both given input on policy ideas initiated by New Westminster city council and initiated conversations on their own policy priorities. Notes from these discussions are passed on to council and city staff to consider their incorporation into policymaking.

The question is whether the project will live beyond its pilot.

Another similar and hopeful democratic innovation, the City of Toronto’s Planning Review Panel, ran for two terms before it was cancelled. In contrast, both the Paris city council and the state government of Ostbelgien (East Belgium) have convened permanent citizen advisory bodies to work alongside elected officials.

While public opinion is only one ingredient in government decision-making, ensuring democratic innovations are a standard component of policymaking could go a long way to enshrining public dialogue as a valuable governance tool.

Whether through annual participatory budgeting exercises or a standing citizen advisory body, democratic innovations can make public priorities a key focus of policy and restore government accountability to citizens…(More)”.

Generative Artificial Intelligence and Open Data: Guidelines and Best Practices


US Department of Commerce: “…This guidance provides actionable guidelines and best practices for publishing open data optimized for generative AI systems. While it is designed for use by the Department of Commerce and its bureaus, this guidance has been made publicly available to benefit open data publishers globally…(More)”. See also: A Fourth Wave of Open Data? Exploring the Spectrum of Scenarios for Open Data and Generative AI

What Could Citizens’ Assemblies Do for American Politics?


Essay by Nick Romeo: “Last July, an unusual letter arrived at Kathryn Kundmueller’s mobile home, in central Oregon. It invited her to enter a lottery that would select thirty residents of Deschutes County to deliberate for five days on youth homelessness—a visible and contentious issue in an area where the population and cost of living have spiked in recent years. Those chosen would be paid for their time—almost five hundred dollars—and asked to develop specific policy recommendations.

Kundmueller was being invited to join what is known as a citizens’ assembly. These gatherings do what most democracies only pretend to: trust normal people to make decisions on difficult policy questions. Many citizens’ assemblies follow a basic template. They impanel a random but representative cross-section of a population, give them high-quality information on a topic, and ask them to work together to reach a decision. In Europe, such groups have helped spur reform of the Irish constitution in order to legalize abortion, guided an Austrian pharmaceutical heiress on how to give away her wealth, and become a regular part of government in Paris and Belgium. Though still rare in America, the model reflects the striking idea that fundamental problems of politics—polarization, apathy, manipulation by special interests—can be transformed through radically direct democracy.

Kundmueller, who is generally frustrated by politics, was intrigued by the letter. She liked the prospect of helping to shape local policy, and the topic of housing insecurity had a particular resonance for her. As a teen-ager, following a falling-out with her father, she spent months bouncing between friends’ couches in Vermont. When she moved across the country to San Jose, after college, she lived in her car for a time while she searched for a stable job. She worked in finance but became disillusioned; now in her early forties, she ran a small housecleaning business. She still thought about living in a van and renting out her mobile home to save money…(More)”.

Which Health Facilities Have Been Impacted by L.A.-Area Fires? AI May Paint a Clearer Picture


Article by Andrew Schroeder: “One of the most important factors for humanitarian responders in these types of large-scale disaster situations is to understand the effects on the formal health system, on which most people — and vulnerable communities in particular — rely in their neighborhoods. Evaluation of the impact of disasters on individual structures, including critical infrastructure such as health facilities, is traditionally a relatively slow and manually arduous process, involving extensive ground truth visitation by teams of assessment professionals.

Speeding up this process without losing accuracy, while potentially improving the safety and efficiency of assessment teams, is among the more important analytical efforts Direct Relief can undertake for response and recovery efforts. Manual assessments can now be effectively paired with AI-based analysis of satellite imagery to do just that…

With the advent of geospatial AI models trained on disaster damage impacts, ground assessment is not the only tool available to response agencies and others seeking to understand how much damage has occurred and the degree to which that damage may affect essential services for communities. The work of the Oregon State University team of experts in remote sensing-based post-disaster damage detection, led by Jamon Van Den Hoek and Corey Scher, was featured in the Financial Times on January 9.

Their modeling, based on Sentinel-1 satellite imagery, identified 21,757 structures overall, of which 11,124 were determined to have some level of damage. The Oregon State model does not distinguish between different levels of damage, and therefore cannot respond to certain types of questions that the manual inspections can respond to, but nevertheless the coverage area and the speed of detection have been much greater…(More)”.
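
The general approach behind this kind of satellite-based damage mapping is to compare pre- and post-event radar backscatter and flag pixels whose change exceeds a threshold. A simplified sketch with synthetic arrays (not the Oregon State team’s actual pipeline, whose methods the article does not detail):

```python
import numpy as np

def log_ratio_change(pre, post, threshold_db=3.0):
    """Flag pixels whose backscatter changed strongly between acquisitions.

    pre, post    -- 2D arrays of SAR backscatter intensity (linear scale)
    threshold_db -- absolute change, in decibels, treated as potential damage
    """
    eps = 1e-6                                   # avoid log of zero
    ratio_db = 10 * np.log10((post + eps) / (pre + eps))
    return np.abs(ratio_db) > threshold_db       # boolean damage mask

# Synthetic example: a 100x100 scene where a 20x20 block loses most of its signal.
rng = np.random.default_rng(0)
pre = rng.gamma(shape=2.0, scale=0.5, size=(100, 100))
post = pre.copy()
post[40:60, 40:60] *= 0.2                        # simulated damage footprint
mask = log_ratio_change(pre, post)
print(f"{mask.sum()} of {mask.size} pixels flagged as changed")
```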

Behaviour-based dependency networks between places shape urban economic resilience


Paper by Takahiro Yabe et al: “Disruptions, such as closures of businesses during pandemics, not only affect businesses and amenities directly but also influence how people move, spreading the impact to other businesses and increasing the overall economic shock. However, it is unclear how much businesses depend on each other during disruptions. Leveraging human mobility data and same-day visits in five US cities, we quantify dependencies between points of interest encompassing businesses, stores and amenities. We find that dependency networks computed from human mobility exhibit significantly higher rates of long-distance connections and biases towards specific pairs of point-of-interest categories. We show that using behaviour-based dependency relationships improves the predictability of business resilience during shocks by around 40% compared with distance-based models, and that neglecting behaviour-based dependencies can lead to underestimation of the spatial cascades of disruptions. Our findings underscore the importance of measuring complex relationships in patterns of human mobility to foster urban economic resilience to shocks…(More)”.
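
The core measurement, treating two points of interest as dependent when visitors tend to reach both on the same day, can be illustrated with a simple co-visit count (a toy sketch, not the authors’ actual estimator):

```python
import pandas as pd
from itertools import combinations
from collections import Counter

# Toy visit log: one row per (visitor, date, point of interest).
visits = pd.DataFrame({
    "visitor": ["a", "a", "a", "b", "b", "c"],
    "date":    ["2024-01-01"] * 3 + ["2024-01-01"] * 2 + ["2024-01-02"],
    "poi":     ["cafe", "grocery", "gym", "cafe", "grocery", "gym"],
})

# Count how often each pair of POIs is visited by the same person on the same day.
co_visits = Counter()
for _, group in visits.groupby(["visitor", "date"]):
    for a, b in combinations(sorted(group["poi"].unique()), 2):
        co_visits[(a, b)] += 1

# Normalise by each POI's total visits to get a rough "dependency" edge weight.
poi_totals = visits["poi"].value_counts()
for (a, b), n in co_visits.items():
    weight = n / min(poi_totals[a], poi_totals[b])
    print(f"{a} <-> {b}: {n} co-visits, weight {weight:.2f}")
```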

Kickstarting Collaborative, AI-Ready Datasets in the Life Sciences with Government-funded Projects


Article by Erika DeBenedictis, Ben Andrew & Pete Kelly: “In the age of Artificial Intelligence (AI), large high-quality datasets are needed to move the field of life science forward. However, the research community lacks strategies to incentivize collaboration on high-quality data acquisition and sharing. The government should fund collaborative roadmapping, certification, collection, and sharing of large, high-quality datasets in life science. In such a system, nonprofit research organizations engage scientific communities to identify key types of data that would be valuable for building predictive models, and define quality control (QC) and open science standards for collection of that data. Projects are designed to develop automated methods for data collection, certify data providers, and facilitate data collection in consultation with researchers throughout various scientific communities. Hosting of the resulting open data is subsidized as well as protected by security measures. This system would provide crucial incentives for the life science community to identify and amass large, high-quality open datasets that will immensely benefit researchers…(More)”.

Government reform starts with data, evidence


Article by Kshemendra Paul: “It’s time to strengthen the use of data, evidence and transparency to stop driving with mud on the windshield and to steer the government toward improving management of its programs and operations.

Existing Government Accountability Office and agency inspectors general reports identify thousands of specific evidence-based recommendations to improve efficiency, economy and effectiveness, and reduce fraud, waste and abuse. Many of these recommendations aim at program design and requirements, highlighting specific instances of overlap, redundancy and duplication. Others describe inadequate internal controls to balance program integrity with the experience of the customer, contractor or grantee. While progress is being reported in part due to stronger partnerships with IGs, much remains to be done. Indeed, GAO’s 2023 High Risk List, which it has produced going back to 1990, shows surprisingly slow progress of efforts to reduce risk to government programs and operations.

Here are a few examples:

  • GAO estimates recent annual fraud of between $233 billion and $521 billion, or about 3% to 7% of federal spending. On the other hand, identified fraud with high-risk Recovery Act spending was held under 1% using data, transparency and partnerships with Offices of Inspectors General.
  • GAO and IGs have collectively identified hundreds of billions in potential cost savings or improvements not yet addressed by federal agencies.
  • GAO has recently described shortcomings with the government’s efforts to build evidence. While federal policymakers need good information to inform their decisions, the Commission on Evidence-Based Policymaking previously said, “too little evidence is produced to meet this need.”

One of the main reasons for agency sluggishness is the lack of agency and governmentwide use of synchronized, authoritative and shared data to support how the government manages itself.

For example, the Energy Department IG found that, “[t]he department often lacks the data necessary to make critical decisions, evaluate and effectively manage risks, or gain visibility into program results.” It is past time for the government to commit itself to move away from its widespread use of data calls, the error-prone, costly and manual aggregation of data used to support policy analysis and decision-making. Efforts to embrace data-informed approaches to manage government programs and operations are stymied by lack of basic agency and governmentwide data hygiene. While bright pockets exist, management gaps, as DOE OIG stated, “create blind spots in the universe of data that, if captured, could be used to more efficiently identify, track and respond to risks…”

The proposed approach starts with current agency operating models, then drives into management process integration to tackle root causes of dysfunction from the bottom up. It recognizes that inefficiency, fraud and other challenges are diffused, deeply embedded and have non-obvious interrelationships within the federal complex…(More)”

Academic writing is getting harder to read—the humanities most of all


The Economist: “Academics have long been accused of jargon-filled writing that is impossible to understand. A recent cautionary tale was that of Ally Louks, a researcher who set off a social media storm with an innocuous post on X celebrating the completion of her PhD. If it was Ms Louks’s research topic (“olfactory ethics”—the politics of smell) that caught the attention of online critics, it was her verbose thesis abstract that further provoked their ire. In two weeks, the post received more than 21,000 retweets and 100m views.

Although the abuse directed at Ms Louks reeked of misogyny and anti-intellectualism—which she admirably shook off—the reaction was also a backlash against an academic use of language that is removed from normal life. Inaccessible writing is part of the problem. Research has become harder to read, especially in the humanities and social sciences. Though authors may argue that their work is written for expert audiences, much of the general public suspects that some academics use gobbledygook to disguise the fact that they have nothing useful to say. The trend towards more opaque prose hardly allays this suspicion…(More)”.

How Your Car Might Be Making Roads Safer


Article by Kashmir Hill: “Darcy Bullock, a civil engineering professor at Purdue University, turns to his computer screen to get information about how fast cars are traveling on Interstate 65, which runs 887 miles from Lake Michigan to the Gulf of Mexico. It’s midafternoon on a Monday, and his screen is mostly filled with green dots indicating that traffic is moving along nicely. But near an exit on the outskirts of Indianapolis, an angry red streak shows that cars have stopped moving.

A traffic camera nearby reveals the cause: A car has spun out, causing gridlock.

In recent years, vehicles that have wireless connectivity have become a critical source of information for transportation departments and for academics who study traffic patterns. The data these vehicles emit — including speed, how hard they brake and accelerate, and even if their windshield wipers are on — can offer insights into dangerous road conditions, congestion or poorly timed traffic signals.
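
In practice, analysts aggregate these event streams into road-segment statistics, for example counting hard-braking events and speed drops per segment to flag likely hazards. A rough sketch under assumed field names (the real feeds and schemas are proprietary and not described in the article):

```python
import pandas as pd

# Assumed schema for a connected-vehicle event feed (field names are illustrative).
events = pd.DataFrame({
    "segment_id": ["I65-231", "I65-231", "I65-231", "I65-232", "I65-233"],
    "speed_mph":  [12, 8, 15, 64, 67],
    "hard_brake": [True, True, True, False, False],
    "wipers_on":  [False, False, False, False, True],
})

# Flag segments where many vehicles brake hard and average speed is far below free flow.
FREE_FLOW_MPH = 65
summary = events.groupby("segment_id").agg(
    vehicles=("speed_mph", "size"),
    mean_speed=("speed_mph", "mean"),
    hard_brakes=("hard_brake", "sum"),
)
summary["likely_incident"] = (
    (summary["hard_brakes"] >= 2) & (summary["mean_speed"] < 0.5 * FREE_FLOW_MPH)
)
print(summary)
```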

“Our cars know more about our roads than agencies do,” said Dr. Bullock, who regularly works with the Indiana Department of Transportation to conduct studies on how to reduce traffic congestion and increase road safety. He credits connected-car data with detecting hazards that would have taken years — and many accidents — to find in the past.

The data comes primarily from commercial trucks and from cars made by General Motors that are enrolled in OnStar, G.M.’s internet-connected service. (Drivers know OnStar as the service that allows them to lock their vehicles from a smartphone app or find them if they have been stolen.) Federal safety guidelines require commercial truck drivers to be routinely monitored, but people driving G.M. vehicles may be surprised to know that their data is being collected, though it is indicated in the fine print of the company’s privacy policy…(More)”.