Artificial Intelligence: Generative AI’s Environmental and Human Effects


GAO Report: “Generative artificial intelligence (AI) could revolutionize entire industries. In the nearer term, it may dramatically increase productivity and transform daily tasks in many sectors. However, both its benefits and risks, including its environmental and human effects, are unknown or unclear.

Generative AI uses significant energy and water resources, but companies are generally not reporting details of these uses. Most estimates of environmental effects of generative AI technologies have focused on quantifying the energy consumed, and carbon emissions associated with generating that energy, required to train the generative AI model. Estimates of water consumption by generative AI are limited. Generative AI is expected to be a driving force for data center demand, but what portion of data center electricity consumption is related to generative AI is unclear. According to the International Energy Agency, U.S. data center electricity consumption was approximately 4 percent of U.S. electricity demand in 2022 and could be 6 percent of demand in 2026.

While generative AI may bring beneficial effects for people, GAO highlights five risks and challenges of generative AI that could negatively affect society, culture, and people (see figure). For example, unsafe systems may produce outputs that compromise safety, such as inaccurate information, undesirable content, or the enabling of malicious behavior. However, definitive statements about these risks and challenges are difficult to make because generative AI is rapidly evolving, and private developers do not disclose some key technical information.

Selected generative artificial intelligence risks and challenges that could result in human effects

GAO identified policy options to consider that could enhance the benefits or address the challenges of environmental and human effects of generative AI. These policy options identify possible actions by policymakers, which include Congress, federal agencies, state and local governments, academic and research institutions, and industry. In addition, policymakers could choose to maintain the status quo, whereby they would not take additional action beyond current efforts. See below for details on the policy options…(More)”.

Guiding the provision of quality policy advice: the 5D model


Paper by Christopher Walker and Sally Washington: “… presents a process model to guide the production of quality policy advice. The work draws on engagement with both public sector practitioners and academics to design a process model for the development of policy advice that works in practice (can be used by policy professionals in their day-to-day work) and aligns with theory (can be taught as part of explaining the dynamics of a wider policy advisory system). The 5D Model defines five key domains of inquiry: understanding Demand, being open to Discovery, undertaking Design, identifying critical Decision points, and shaping advice to enable Delivery. Our goal is a ‘repeatable, scalable’ model for supporting policy practitioners to provide quality advice to decision makers. The model was developed and tested through an extensive process of engagement with senior policy practitioners who noted the heuristic gave structure to practices that determine how policy advice is organized and formulated. Academic colleagues confirmed the utility of the model for explaining and teaching how policy is designed and delivered within the context of a wider policy advisory system (PAS). A unique aspect of this work was the collaboration and shared interest amongst academics and practitioners to define a model that is ‘useful for teaching’ and ‘useful for doing’…(More)”.

Exit to Open


Article by Jim Fruchterman and Steve Francis: “What happens when a nonprofit program or an entire organization needs to shut down? The communities being served, and often society as a whole, are the losers. What if it were possible to mitigate some of that damage by sharing valuable intellectual property assets of the closing effort for longer term benefit? Organizations in these tough circumstances must give serious thought to a responsible exit for their intangible assets.

At the present moment of unparalleled disruption, the entire nonprofit sector is rethinking everything: language to describe their work, funding sources, partnerships, and even their continued existence. Nonprofit programs and entire charities will be closing, or being merged out of existence. Difficult choices are being made. Who will fill the role of witness and archivist to preserve the knowledge of these organizations, their writings, media, software, and data, for those who carry on, either now or in the future?

We believe leaders in these tough days should consider a model we’re calling Exit to Open (E2O) and related exit concepts to safeguard these assets going forward…

Exit to Open (E2O) exploits three elements:

  1. We are in an era where the cost of digital preservation is low; storing a few more bytes for a long time is cheap.
  2. It’s far more effective for an organization’s staff to isolate and archive critical content than an outsider with limited knowledge attempting to do so later.
  3. These resources are of greatest use if there is a human available to interpret them, and a deliberate archival process allows for the identification of these potential interpreters…(More)”.

Hundreds of scholars say U.S. is swiftly heading toward authoritarianism


Article by Frank Langfitt: “A survey of more than 500 political scientists finds that the vast majority think the United States is moving swiftly from liberal democracy toward some form of authoritarianism.

In the benchmark survey, known as Bright Line Watch, U.S.-based professors rate the performance of American democracy on a scale from zero (complete dictatorship) to 100 (perfect democracy). After President Trump’s election in November, scholars gave American democracy a rating of 67. Several weeks into Trump’s second term, that figure plummeted to 55.

“That’s a precipitous drop,” says John Carey, a professor of government at Dartmouth and co-director of Bright Line Watch. “There’s certainly consensus: We’re moving in the wrong direction.”…Not all political scientists view Trump with alarm, but many like Carey who focus on democracy and authoritarianism are deeply troubled by Trump’s attempts to expand executive power over his first several months in office.

“We’ve slid into some form of authoritarianism,” says Steven Levitsky, a professor of government at Harvard, and co-author of How Democracies Die. “It is relatively mild compared to some others. It is certainly reversible, but we are no longer living in a liberal democracy.”…Kim Lane Scheppele, a Princeton sociologist who has spent years tracking Hungary, is also deeply concerned: “We are on a very fast slide into what’s called competitive authoritarianism.”

When these scholars use the term “authoritarianism,” they aren’t talking about a system like China’s, a one-party state with no meaningful elections. Instead, they are referring to something called “competitive authoritarianism,” the kind scholars say they see in countries such as Hungary and Turkey.

In a competitive authoritarian system, a leader comes to power democratically and then erodes the system of checks and balances. Typically, the executive fills the civil service and key appointments — including the prosecutor’s office and judiciary — with loyalists. He or she then attacks the media, universities and nongovernmental organizations to blunt public criticism and tilt the electoral playing field in the ruling party’s favor…(More)”.

Who Owns Science?


Article by Lisa Margonelli: “Only a few months into 2025, the scientific enterprise is reeling from a series of shocks—mass firings of the scientific workforce across federal agencies, cuts to federal research budgets, threats to indirect costs for university research, proposals to tax endowments, termination of federal science advisory committees, and research funds to prominent universities held hostage over political conditions. Amid all this, the public has not shown much outrage at—or even interest in—the dismantling of the national research project that they’ve been bankrolling for the past 75 years.

Some evidence of a disconnect from the scientific establishment was visible in confirmation hearings of administration appointees. During his Senate nomination hearing to head the Department of Health and Human Services, Robert F. Kennedy Jr. promised a reorientation of research from infectious disease toward chronic conditions, along with “radical transparency” to rebuild trust in science. While his fans applauded, he insisted that he was not anti-vaccine, declaring, “I am pro-safety.”

But lack of public reaction to funding cuts need not be pinned on distrust of science; it could simply be that few citizens see the $200-billion-per-year, envy-of-the-world scientific enterprise as their own. On March 15, Alabama meteorologist James Spann took to Facebook to narrate the approach of 16 tornadoes in the state, taking note that people didn’t seem to care about the president’s threat to close the National Weather Service. “People say, ‘Well, if they shut it down, I’ll just use my app,’” Spann told Inside Climate News. “Well, where do you think the information on your app comes from? It comes from computer model output that’s run by the National Weather Service.” The public has paid for those models for generations, but only a die-hard weather nerd can find the acronyms for the weather models that signal that investment on these apps…(More)”.

For sale: Data on US servicemembers — and lots of it


Article by Alfred Ng: “Active-duty members of the U.S. military are vulnerable to having their personal information collected, packaged and sold to overseas companies without any vetting, according to a new report funded by the U.S. Military Academy at West Point.

The report highlights a significant American security risk, according to military officials, lawmakers and the experts who conducted the research, and who say the data available on servicemembers exposes them to blackmail based on their jobs and habits.

It also casts a spotlight on the practices of data brokers, a set of firms that specialize in scraping and packaging people’s digital records such as health conditions and credit ratings.

“It’s really a case of being able to target people based on specific vulnerabilities,” said Maj. Jessica Dawson, a research scientist at the Army Cyber Institute at West Point who initiated the study.

Data brokers gather government files, publicly available information and financial records into packages they can sell to marketers and other interested companies. As the practice has grown into a $214 billion industry, it has raised privacy concerns and come under scrutiny from lawmakers in Congress and state capitals.

Worried it could also present a risk to national security, the U.S. Military Academy at West Point funded the study from Duke University to see how servicemembers’ information might be packaged and sold.

Posing as buyers in the U.S. and Singapore, Duke researchers contacted multiple data-broker firms that listed datasets about active-duty servicemembers for sale. Three agreed and sold datasets to the researchers while two declined, saying the requests came from companies that didn’t meet their verification standards.

In total, the datasets contained information on nearly 30,000 active-duty military personnel. They also purchased a dataset on an additional 5,000 friends and family members of military personnel…(More)”

AI models could help negotiators secure peace deals


The Economist: “In a messy age of grinding wars and multiplying tariffs, negotiators are as busy as the stakes are high. Alliances are shifting and political leaders are adjusting—if not reversing—positions. The resulting tumult is giving even seasoned negotiators trouble keeping up with their superiors back home. Artificial-intelligence (AI) models may be able to lend a hand.

Some such models are already under development. One of the most advanced projects, dubbed Strategic Headwinds, aims to help Western diplomats in talks on Ukraine. Work began during the Biden administration in America, with officials on the White House’s National Security Council (NSC) offering guidance to the Centre for Strategic and International Studies (CSIS), a think-tank in Washington that runs the project. With peace talks under way, CSIS has speeded up its effort. Other outfits are doing similar work.

The CSIS programme is led by a unit called the Futures Lab. This team developed an AI language model using software from Scale AI, a firm based in San Francisco, and unique training data. The lab designed a tabletop strategy game called “Hetman’s Shadow” in which Russia, Ukraine and their allies hammer out deals. Data from 45 experts who played the game were fed into the model. So were media analyses of issues at stake in the Russia-Ukraine war, as well as answers provided by specialists to a questionnaire about the relative values of potential negotiation trade-offs. A database of 374 peace agreements and ceasefires was also poured in.

Thus was born, in late February, the first iteration of the Ukraine-Russia Peace Agreement Simulator. Users enter preferences for outcomes grouped under four rubrics: territory and sovereignty; security arrangements; justice and accountability; and economic conditions. The AI model then cranks out a draft agreement. The software also scores, on a scale of one to ten, the likelihood that each of its components would be satisfactory, negotiable or unacceptable to Russia, Ukraine, America and Europe. The model was provided to government negotiators from those last three territories, but a limited “dashboard” version of the software can be run online by interested members of the public…(More)”.
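The article describes the simulator scoring each draft component, on a one-to-ten scale, as satisfactory, negotiable, or unacceptable to each of four parties, with preferences grouped under four rubrics. A minimal sketch of that scoring structure is below; the rubric and party names come from the article, but the numeric scores, the band thresholds, and all function names are invented for illustration and do not reflect the actual CSIS model.

```python
# Toy sketch of the simulator's scoring structure described above.
# The rubrics and parties are from the article; scores and thresholds
# are hypothetical.

RUBRICS = [
    "territory and sovereignty",
    "security arrangements",
    "justice and accountability",
    "economic conditions",
]
PARTIES = ["Russia", "Ukraine", "America", "Europe"]

def classify(score: int) -> str:
    """Map a 1-10 acceptability score to the article's three bands
    (thresholds here are assumptions, not the model's)."""
    if score >= 7:
        return "satisfactory"
    if score >= 4:
        return "negotiable"
    return "unacceptable"

def score_draft(draft: dict) -> dict:
    """For each rubric in a draft, classify each party's score."""
    return {
        rubric: {party: classify(s) for party, s in party_scores.items()}
        for rubric, party_scores in draft.items()
    }

# Invented example scores for a single draft component:
draft = {"security arrangements": {"Russia": 3, "Ukraine": 8,
                                   "America": 6, "Europe": 6}}
result = score_draft(draft)
# e.g. result["security arrangements"]["Russia"] == "unacceptable"
```

The real model presumably learns these acceptability estimates from its game and questionnaire data rather than from fixed thresholds; the sketch only shows the shape of the output the article describes.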

DOGE’s Growing Reach into Personal Data: What it Means for Human Rights


Article by Deborah Brown: “Expansive interagency sharing of personal data could fuel abuses against vulnerable people and communities who are already being targeted by Trump administration policies, like immigrants, lesbian, gay, bisexual, and transgender (LGBT) people, and student protesters. The personal data held by the government reveals deeply sensitive information, such as people’s immigration status, race, gender identity, sexual orientation, and economic status.

A massive centralized government database could easily be used for a range of abusive purposes, like to discriminate against current federal employees and future job applicants on the basis of their sexual orientation or gender identity, or to facilitate the deportation of immigrants. It could result in people forgoing public services out of fear that their data will be weaponized against them by another federal agency.

But the danger doesn’t stop with those already in the administration’s crosshairs. The removal of barriers keeping private data siloed could allow the government or DOGE to deny federal loans for education or Medicaid benefits based on unrelated or even inaccurate data. It could also facilitate the creation of profiles containing all of the information various agencies hold on every person in the country. Such profiles, combined with social media activity, could facilitate the identification and targeting of people for political reasons, including in the context of elections.

Information silos exist for a reason. Personal data should be collected for a determined, specific, and legitimate purpose, and not used for another purpose without notice or justification, according to the key internationally recognized data protection principle, “purpose limitation.” Sharing data seamlessly across federal or even state agencies in the name of an undefined and unmeasurable goal of efficiency is incompatible with this core data protection principle…(More)”.

Code Shift: Using AI to Analyze Zoning Reform in American Cities


Report by Arianna Salazar-Miranda & Emily Talen: “Cities are at the forefront of addressing global sustainability challenges, particularly those exacerbated by climate change. Traditional zoning codes, which often segregate land uses, have been linked to increased vehicular dependence, urban sprawl and social disconnection, undermining broader social and environmental sustainability objectives. This study investigates the adoption and impact of form-based codes (FBCs), which aim to promote sustainable, compact and mixed-use urban forms as a solution to these issues. Using natural language processing techniques, we analyzed zoning documents from over 2,000 United States census-designated places to identify linguistic patterns indicative of FBC principles. Our findings reveal widespread adoption of FBCs across the country, with notable variations within regions. FBCs are associated with higher floor to area ratios, narrower and more consistent street setbacks and smaller plots. We also find that places with FBCs have improved walkability, shorter commutes and a higher share of multifamily housing. Our findings highlight the utility of natural language processing for evaluating zoning codes and underscore the potential benefits of form-based zoning reforms for enhancing urban sustainability…(More)”.
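The report's method rests on detecting linguistic patterns indicative of FBC principles in zoning text. A minimal keyword-matching sketch of that kind of first pass is below; the indicator phrases, the `fbc_signal` function, and the scoring are all illustrative assumptions, not the study's actual pipeline, which would use more sophisticated NLP than phrase lookup.

```python
# Hypothetical sketch: a keyword-based first pass for spotting
# form-based-code language in a zoning document. The phrase list and
# scoring are invented for illustration; the study's real method is
# not reproduced here.
import re

# Phrases commonly associated with form-based codes (illustrative list).
FBC_PATTERNS = [
    r"form-based code",
    r"build-to line",
    r"frontage type",
    r"transect zone",
    r"regulating plan",
]

def fbc_signal(text: str) -> float:
    """Return the fraction of indicator phrases present in the text."""
    text = text.lower()
    hits = sum(1 for p in FBC_PATTERNS if re.search(p, text))
    return hits / len(FBC_PATTERNS)

sample = "The regulating plan assigns each transect zone a build-to line."
score = fbc_signal(sample)  # 3 of 5 phrases match, so 0.6
```

A corpus-scale analysis like the report's would apply something along these lines (or embedding-based classification) across thousands of municipal codes, then correlate the resulting signal with built-form outcomes such as floor-to-area ratios and setbacks.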

Artificial Intelligence and the Future of Work


Report by National Academies of Sciences, Engineering, and Medicine: “Advances in artificial intelligence (AI) promise to improve productivity significantly, but there are many questions about how AI could affect jobs and workers.

Recent technical innovations have driven the rapid development of generative AI systems, which produce text, images, or other content based on user requests – advances which have the potential to complement or replace human labor in specific tasks, and to reshape demand for certain types of expertise in the labor market.

Artificial Intelligence and the Future of Work evaluates recent advances in AI technology and their implications for economic productivity, the workforce, and education in the United States. The report notes that AI is a tool with the potential to enhance human labor and create new forms of valuable work – but this is not an inevitable outcome. Tracking progress in AI and its impacts on the workforce will be critical to helping inform and equip workers and policymakers to flexibly respond to AI developments…(More)”.