“R&D” Means Something Different on Capitol Hill


Article by Sheril Kirshenbaum: “My first morning as a scientist-turned-Senate-staffer began with a misunderstanding that would become a metaphor for my impending immersion into the complex world of policymaking. When my new colleagues mentioned “R&D,” I naively assumed they were discussing critical topics related to research and development. After 10 or so confused minutes, I realized they were referring to Republicans and Democrats—my first lesson in the distinctive language and unique dynamics of congressional work. The “R&D” at the center of their world was vastly different from mine.

In the 20 years since, I’ve moved between academic science positions and working on science policy in the Senate, under both Republican and Democratic majorities. My goal during these two decades has remained the same—to promote evidence-based policymaking that advances science and serves the public, regardless of the political landscape. But the transition from scientist to staffer has transformed my understanding of why so many efforts by scientists to influence policy falter. Despite generations of scholarly research on how information informs political decisions, scientists and other academics consistently overlook a crucial part of the process: the role of congressional staffers.

The staff hierarchy shapes how scientific information flows to elected officials. Chiefs of staff manage office operations and serve as the member’s closest advisors. Legislative directors oversee all policy matters, while legislative assistants (LAs) handle specific issue portfolios. One or two LAs may be designated as the office “science people,” although they often lack formal scientific training. Committee staffers provide deeper expertise and institutional knowledge on topics within their jurisdiction. In this ecosystem, few dedicated science positions exist, and science-related topics are distributed among staff already juggling multiple responsibilities…(More)”

Farmers win legal fight to bring climate resources back to federal websites


Article by Justine Calma: “After farmers filed suit, the US Department of Agriculture (USDA) has agreed to restore climate information to webpages it took down soon after President Donald Trump took office this year.

The US Department of Justice filed a letter late last night on behalf of the USDA that says the agency “will restore the climate-change-related web content that was removed post-inauguration, including all USDA webpages and interactive tools” that were named in the plaintiffs’ complaint. It says the work is already “underway” and should be mostly done in about two weeks.

If the Trump administration fulfills that commitment, it’ll be a significant victory for farmers and other Americans who rely on scientific data that has disappeared from federal websites since January…(More)”.

Indiana Faces a Data Center Backlash


Article by Matthew Zeitlin: “Indiana has power. Indiana has transmission. Indiana has a business-friendly Republican government. Indiana is close to Chicago but — crucially — not in Illinois. All of this has led to a huge surge of data center development in the “Crossroads of America.” It has also led to an upswell of local opposition.

There are almost 30 active data center proposals in Indiana, plus five that have already been rejected in the past year, according to data collected by the environmentalist group Citizens Action Coalition. Google, Amazon, and Meta have all announced projects in the state since the beginning of 2024.

Nipsco, one of the state’s utilities, has projected 2,600 megawatts worth of new load by the middle of the next decade as its base scenario, mostly attributable to “large economic development projects.” In a more aggressive scenario, it sees 3,200 megawatts of new load — that’s three large nuclear reactors’ worth — by 2028 and 8,600 megawatts by 2035. While short of, say, the almost 36,500 megawatts worth of load growth planned in Georgia for the next decade, it’s still a vast range of outcomes that requires some kind of advanced planning.

That new electricity consumption will likely be powered by fossil fuels. Projected load growth in the state has extended a lifeline to Indiana’s coal-fired power plants, with retirement dates for some of the fleet being pushed out to late in the 2030s. It’s also created a market for new natural gas-fired plants that utilities say are necessary to power the expected new load.

State and local political leaders have greeted these new data center projects with enthusiasm, Ben Inskeep, the program director at CAC, told me. “Economic development is king here,” he said. “That is what all the politicians and regulators say their number one concern is: attracting economic development.”…(More)”.

The Importance of Co-Designing Questions: 10 Lessons from Inquiry-Driven Grantmaking


Article by Hannah Chafetz and Stefaan Verhulst: “How can a question-based approach to philanthropy enable better learning and deeper evaluation across both sides of the partnership and help make progress towards long-term systemic change? That’s what Siegel Family Endowment (Siegel), a family foundation based in New York City, sought to answer by creating an Inquiry-Driven Grantmaking approach.

While many philanthropies continue to follow traditional practices that focus on achieving a set of strategic objectives, Siegel employs an inquiry-driven approach, which focuses on answering questions that can accelerate insights and iteration across the systems they seek to change. By framing their goal as “learning” rather than an “outcome” or “metric,” they aim to generate knowledge that can be shared across the whole field and unlock impact beyond the work on individual grants. 

The Siegel approach centers on co-designing and iteratively refining questions with grantees to address evolving strategic priorities, using rapid iteration and stakeholder engagement to generate insights that inform both grantee efforts and the foundation’s decision-making.

Their approach was piloted in 2020, then refined and operationalized in the years that followed. As of 2024, it was applied across the vast majority of their grantmaking portfolio. Laura Maher, Chief of Staff and Director of External Engagement at Siegel Family Endowment, notes: “Before our Inquiry-Driven Grantmaking approach we spent roughly 90% of our time on the grant writing process and 10% checking in with grantees, and now that’s balancing out more.”


Image of the Inquiry-Driven Grantmaking Process from the Siegel Family Endowment

Earlier this year, the DATA4Philanthropy team held two in-depth discussions with Siegel’s Knowledge and Impact team about their Inquiry-Driven Grantmaking approach and what they have learned thus far from applying the new methodology. While the Siegel team notes that there is still much to be learned, there are several takeaways that can be applied by others looking to initiate a questions-led approach. 

Below we provide 10 emerging lessons from these discussions…(More)”.

Building Community-Centered AI Collaborations


Article by Michelle Flores Vryn and Meena Das: “AI can only boost the under-resourced nonprofit world if we design it to serve the communities we care about. But as nonprofits consider how to incorporate AI into their work, many look to expertise from the tech sector, expecting tools and implementation advice as well as ethical guidance. Yet when mission-driven entities—with a strong focus on people, communities, and equity—partner solely with tech companies, they may encounter a variety of obstacles, such as:

  1. Limited understanding of community needs: Sector-specific knowledge is essential for aligning AI with nonprofit missions, something many tech companies lack.
  2. Bias in AI models: Without diverse input, AI models may exacerbate biases or misrepresent the communities that nonprofits serve.
  3. Resource constraints: Tech solutions often presume budgets or capacity beyond what nonprofits can bring to bear, creating a reliance on tools that don’t fit the nonprofit context.

We need creative, diverse collaborations across various fields to ensure that technology is deployed in ways that align with nonprofit values, build trust, and serve the greater good. Seeking partners outside of the tech world helps nonprofits develop AI solutions that are context-aware, equitable, and resource-sensitive. Most importantly, nonprofit practitioners must deeply consider our ideal future state: What does an AI-empowered nonprofit sector look like when it truly centers human well-being, community agency, and ethical technology?

Imagining this future means not just reacting to emerging technology but proactively shaping its trajectory. Instead of simply adapting to AI’s capabilities, nonprofits should ask:

  • What problems do we truly need AI to solve?
  • Whose voices must be centered in AI decision-making?
  • How do we ensure AI remains a tool for empowerment rather than control?…(More)”.

Policy Implications of DeepSeek AI’s Talent Base


Brief by Amy Zegart and Emerson Johnston: “Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent. We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns. Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the quarter or so that did gain some experience in the United States, most returned to China to work on AI development there. These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness…(More)”.

How Bad Is China’s Economy? The Data Needed to Answer Is Vanishing


Article by Rebecca Feng and Jason Douglas: “Not long ago, anyone could comb through a wide range of official data from China. Then it started to disappear. 

Land sales measures, foreign investment data and unemployment indicators have gone dark in recent years. Data on cremations and a business confidence index have been cut off. Even official soy sauce production reports are gone.

In all, Chinese officials have stopped publishing hundreds of data points once used by researchers and investors, according to a Wall Street Journal analysis. 

In most cases, Chinese authorities haven’t given any reason for ending or withholding data. But the missing numbers have come as the world’s second biggest economy has stumbled under the weight of excessive debt, a crumbling real-estate market and other troubles—spurring heavy-handed efforts by authorities to control the narrative.

China’s National Bureau of Statistics stopped publishing some numbers related to unemployment in urban areas in recent years. After an anonymous user on the bureau’s website asked why one of those data points had disappeared, the bureau said only that the ministry that provided it stopped sharing the data.

The disappearing data have made it harder for people to know what’s going on in China at a pivotal time, with the trade war between Washington and Beijing expected to hit China hard and weaken global growth. Plunging trade with the U.S. has already led to production shutdowns and job cuts.

Getting a true read on China’s growth has always been tricky. Many economists have long questioned the reliability of China’s headline gross domestic product data, and concerns have intensified recently. Official figures put GDP growth at 5% last year and 5.2% in 2023, but some have estimated that Beijing overstated its numbers by as much as 2 to 3 percentage points. 

To get what they consider to be more realistic assessments of China’s growth, economists have turned to alternative sources such as movie box office revenues, satellite data on the intensity of nighttime lights, the operating rates of cement factories and electricity generation by major power companies. Some parse location data from mapping services run by private companies such as Chinese tech giant Baidu to gauge business activity. 

One economist said he has been assessing the health of China’s services sector by counting news stories about owners of gyms and beauty salons who abruptly close up and skip town with users’ membership fees…(More)”.

The Dangers of AI Nationalism and Beggar-Thy-Neighbour Policies


Paper by Susan Aaronson: “As they attempt to nurture and govern AI, some nations are acting in ways that – with or without direct intent – discriminate among foreign market actors. For example, some governments are excluding foreign firms from access to incentives for high-speed computing, or requiring local content in the AI supply chain, or adopting export controls for the advanced chips that power many types of AI. If policy makers in country X can limit access to the building blocks of AI – whether funds, data or high-speed computing power – it might slow down or limit the AI prowess of its competitors in country Y and/or Z. At the same time, however, such policies could violate international trade norms of non-discrimination. Moreover, if policy makers can shape regulations in ways that benefit local AI competitors, they may also impede the competitiveness of other nations’ AI developers. Such regulatory policies could be discriminatory and breach international trade rules as well as long-standing rules about how nations and firms compete – which, over time, could reduce trust among nations. In this article, the author attempts to illuminate AI nationalism and its consequences by answering four questions:

– What are nations doing to nurture AI capacity within their borders?

– Are some of these actions trade distorting?

– Are some nations adopting twenty-first-century beggar-thy-neighbour policies?

– What are the implications of such trade-distorting actions?

The author finds that AI nationalist policies appear to help countries with the largest and most established technology firms across multiple levels of the AI value chain. Hence, policy makers’ efforts to dominate these sectors, for example through large investment sums or beggar-thy-neighbour policies, are not a good way to build trust…(More)”.

Balancing Data Sharing and Privacy to Enhance Integrity and Trust in Government Programs


Paper by National Academy of Public Administration: “Improper payments and fraud cost the federal government hundreds of billions of dollars each year, wasting taxpayer money and eroding public trust. At the same time, agencies are increasingly expected to do more with less. Finding better ways to share data, without compromising privacy, is critical for ensuring program integrity in a resource-constrained environment.

Key Takeaways

  • Data sharing strengthens program integrity and fraud prevention. Agencies and oversight bodies like GAO and OIGs have uncovered large-scale fraud by using shared data.
  • Opportunities exist to streamline and expedite the compliance processes required by privacy laws and reduce systemic barriers to sharing data across federal agencies.
  • Targeted reforms can address these barriers while protecting privacy:
    1. OMB could issue guidance to authorize fraud prevention as a routine use in System of Records Notices.
    2. Congress could enact special authorities or exemptions for data sharing that supports program integrity and fraud prevention.
    3. A centralized data platform could help to drive cultural change and support secure, responsible data sharing…(More)”

Glorious RAGs: A Safer Path to Using AI in the Social Sector


Blog by Jim Fruchterman: “Social sector leaders ask me all the time for advice on using AI. As someone who started for-profit machine learning (AI) companies in the 1980s, but then pivoted to running nonprofit social enterprises, I’m often the first person from Silicon Valley that many nonprofit leaders have met. I joke that my role is often that of “anti-consultant,” talking leaders out of doing an app, a blockchain (smile) or firing half their staff because of AI. Recently, much of my role has been tamping down the excessive expectations being bandied about for the impact of AI on organizations. However, two years into the latest AI fad wave created by ChatGPT and its LLM (large language model) peers, more and more of the leaders are describing eminently sensible applications of LLMs to their programs. The most frequent of these approaches can be described as variations on “Retrieval-Augmented Generation,” also known as RAG. I am quite enthusiastic about using RAG for social impact, because it addresses a real need and supplies guardrails for using LLMs effectively…(More)”
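For readers unfamiliar with the pattern the excerpt names, the core idea of Retrieval-Augmented Generation is simple: retrieve relevant passages from an organization’s own documents first, then constrain the language model to answer only from them. The sketch below is a minimal, dependency-free illustration of that two-step shape; the corpus, scoring function, and prompt wording are invented for illustration, and a real deployment would use embedding-based search and an actual LLM call rather than a toy word-overlap ranker.

```python
import re

# Toy RAG sketch: (1) retrieve the passages most relevant to a query,
# (2) build a prompt grounded in them -- the "guardrail" the blog describes.

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped (toy tokenizer)."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query.
    A production retriever would use embeddings and a vector index."""
    q = tokens(query)
    ranked = sorted(corpus, key=lambda doc: len(q & tokens(doc)), reverse=True)
    return ranked[:k]

def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Instruct the model to answer only from the retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical nonprofit knowledge base, purely for illustration.
corpus = [
    "Our food bank distributes groceries every Tuesday and Friday.",
    "Volunteers must complete a safety orientation before their first shift.",
    "The annual gala raises roughly a third of our operating budget.",
]

query = "When does the food bank distribute groceries?"
top = retrieve(query, corpus, k=1)
prompt = build_grounded_prompt(query, top)
```

Because the model is handed only vetted passages, a wrong answer is far more likely to be a visible retrieval miss than a confident fabrication, which is what makes the pattern a comparatively safe entry point for the social sector.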