
Stefaan Verhulst

Report by the Federation of American Scientists: “Local government and universities are critical to our communities. How do they work together? How can they support each other? How can we think differently about their relationship to one another – moving beyond big employers and land users to thinking about the fruits and labors of what the research community can do for local policy making.

The Civic Research Agenda is a multi-year, multi-partner study that is the first comprehensive reporting on the priority research needs of U.S. cities and counties. FAS has asked local governments directly about their research needs and the pressing knowledge gaps that, if filled, would help them meet their priority challenges and goals. It also provides an analysis of the supply-side barriers (and recommendations) that will connect research to impact.

This report provides…

  1. research questions that are in demand by local governments; and
  2. specific recommendations for local governments and universities to improve and grow the research-to-impact pipeline for one simple purpose: make research actionable, understandable, and accessible to communities across the country…(More)”.
The Civic Research Agenda

Article by Northwestern Innovation Institute: “Universities produce a vast number of scientific publications each year. Yet only a small share ultimately leads to patents, startups, or broader industry adoption. The challenge is not a shortage of ideas, but limited visibility into which discoveries — and the researchers behind them — are most likely to move toward commercialization.

A new platform developed at the Northwestern Innovation Institute, called InnovationInsights, is designed to make that hidden potential visible.

Using artificial intelligence and large-scale research data, the system helps technology transfer offices identify faculty, papers, and emerging research areas with strong commercial promise — including many discoveries that would otherwise remain outside the innovation pipeline.

At the core of the platform is a searchable interface built around two levels of insight: researchers and their individual publications.

Users can explore researcher profiles that bring together key signals related to translational activity, including publication history, recent high-impact work, invention disclosures, and whether a researcher’s papers have been cited by company patents. These profiles allow innovation teams to quickly identify faculty whose work is influencing industry or to spot patterns associated with future commercialization.

At the publication level, InnovationInsights assigns each paper a commercial potential score based on machine-learning models trained on decades of historical data linking research outputs to downstream outcomes. Users can rank papers by this score to identify emerging discoveries that may be ready for translation, even before any patent activity occurs.

The platform also tracks citations from company patents, offering a direct view of where academic research is being used in industrial innovation. By comparing commercial potential scores with patent influence, institutions can see both future opportunity and current industry relevance…(More)”.
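The score-then-rank workflow described above can be sketched in a few lines. This is an illustrative reconstruction, not InnovationInsights' actual model: the signal names and the fixed logistic weights are hypothetical stand-ins for a model that, in the real platform, would be trained on decades of paper-to-outcome data.

```python
import math

# Hypothetical per-paper signals (assumed, not the platform's actual
# feature set): citation count, invention disclosures by the authors,
# and citations received from company patents.
def commercial_potential(citations, disclosures, patent_citations):
    """Toy stand-in for a trained model: a logistic score over signals.

    The fixed weights below only illustrate the ranking step; a real
    system would learn them from historical commercialization outcomes.
    """
    z = 0.01 * citations + 0.8 * disclosures + 1.2 * patent_citations - 3.0
    return 1.0 / (1.0 + math.exp(-z))  # probability-like score in (0, 1)

papers = {
    "Catalyst screening via ML": (110, 2, 6),
    "A note on lattice cohomology": (5, 0, 0),
    "Wearable glucose biosensor": (60, 1, 3),
}

# Rank papers by score to surface translation-ready work, as the
# article says the platform's users do.
ranking = sorted(papers, key=lambda t: commercial_potential(*papers[t]),
                 reverse=True)
print(ranking)
```

Even this toy version shows why papers without any patent activity can still rank highly: the score reflects several signals at once, not patents alone.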

Finding the innovators hiding in plain sight

Paper by Huw Roberts, Mariarosaria Taddeo, and Luciano Floridi: “Efforts to develop global governance initiatives for artificial intelligence (AI) have increased significantly in recent years. However, these initiatives have generally had a limited impact due to their vagueness, lack of authority and repetition. Several factors contribute to the difficulties in establishing effective global AI governance mechanisms, including geopolitical tensions, institutional gridlock, and the general-purpose and sociotechnical characteristics of AI. Developing politically legitimate governance mechanisms that can operate within these constraints and effectively compel behaviour change among government and industry actors is essential for building a more mature global AI governance ecosystem. In this article, we contribute to this aim by introducing a framework for evaluating the political legitimacy of global governance initiatives. It is designed to clarify why many global AI governance initiatives lack authority and to identify opportunities for more impactful international cooperation. We operationalise the framework by assessing global AI governance initiatives which address two international security problems: establishing regulation for lethal autonomous weapon systems and implementing safety testing for general-purpose AI…(More)”.

A Framework for Evaluating Global AI Governance Initiatives

Article by Leif Weatherby and Benjamin Recht: “A recent Axios story on maternal health policy referenced “findings” that a majority of people trusted their doctors and nurses. On the surface, there’s nothing unusual about that. What wasn’t originally mentioned, however, was that these findings were made up.

Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.

Phone polling has become exponentially harder. Web polling is too uncertain. Silicon sampling removes the messy, costly part of asking people what they think…(More)”.
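The silicon-sampling loop the article describes can be sketched as follows. The `simulated_response` stub and the persona fields are assumptions for illustration, standing in for a real LLM call; this is not Aaru's actual method.

```python
import random

# Minimal sketch of "silicon sampling": have a language model answer a
# survey item in the voice of many synthetic respondents, then aggregate.
# `simulated_response` is a toy stand-in for an LLM call; a production
# system would prompt a model with each persona and parse its answer.
def simulated_response(persona, rng):
    base = 0.75 if persona["sees_doctor_regularly"] else 0.55
    return rng.random() < base  # True = "I trust my doctor"

rng = random.Random(0)  # seeded for reproducibility
personas = [
    {"age": rng.randint(18, 80), "sees_doctor_regularly": rng.random() < 0.6}
    for _ in range(1000)
]

answers = [simulated_response(p, rng) for p in personas]
share = sum(answers) / len(answers)
print(f"Simulated share trusting their doctor: {share:.0%}")
```

The sketch also makes the core worry concrete: the "finding" is entirely a function of assumptions baked into the response model, with no human opinion anywhere in the loop.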

This Is What Will Ruin Public Opinion Polling for Good

Book by Marten Scheffer: “What kind of trouble lies ahead? How can we successfully transition towards a sustainable future? Drawing on a remarkably broad range of insights from complex systems and the functioning of the brain to the history of civilizations and the workings of modern societies, the distinguished scientist Marten Scheffer addresses these key questions of our times. He looks to the past to show how societies have tipped out of trouble before, the mechanisms that drive social transformations and the invisible hands holding us back. He traces how long-standing practices such as the slave trade and foot-binding were suddenly abandoned and how entire civilizations have collapsed to make way for something new. Could we be heading for a similarly dramatic change? Marten Scheffer argues that a dark future is plausible but not yet inevitable and he provides us instead with a hopeful roadmap to steer ourselves away from collapse-and toward renewal.

  • Provides a scientifically credible roadmap toward a sustainable future
  • Shows how tipping points work at a societal level and that individual actions, however small, bring us closer to a fundamental revision of society
  • Highlights the entrenched interests that hold back change, including the role of large corporations and wealthy oligarchies…(More)”.
Tipping Out of Trouble: How Societies Transformed and How We Can Do So Again

OECD Report: “Artificial Intelligence (AI), when scaled responsibly, holds significant potential for healthcare systems. Yet significant barriers to its adoption remain, including fragmented data foundations, regulatory uncertainty, and gaps in governance and workforce capacity. Unleashing AI’s potential to benefit everyone’s health requires the balancing of market forces and health culture.

OECD Member countries are undertaking initiatives to address these gaps, such as establishing strategies and action plans at the intersection of AI and health. To support these actions, a coherent policy checklist was developed to guide decision-making and prioritisation and to avoid blind spots.

The checklist is organised into four pillars: establishing enablers (for data foundations, assuring and scaling AI, and capacity building); implementing guardrails (to oversee and monitor progress toward common objectives); engaging meaningfully with the public, providers and industry; and deploying trustworthy AI. Across the four pillars, nine main policy categories and 43 questions have emerged as critical for responsibly scaling the benefits of AI in health.

Action will be accelerated by learning from each other and solving challenges together. A shared recognition has emerged: that coherent, cross-border compatible policies are essential to balance innovation with safety, and economic opportunity with building public trust…(More)”.

Scaling Artificial Intelligence in Health

CRS Report: “Federal data can provide valuable information for various audiences—from farmers seeking to protect bats that eat crop-harming insects to local efforts determining where to rebuild to avoid coastal flooding. In 2013, the Office of Management and Budget (OMB) described openly available federal data and statistical information as “a valuable national resource and strategic asset” that, when made accessible, discoverable, and usable by the public, “can help fuel entrepreneurship, innovation, and scientific discovery.”

Efforts to make federal data more readily available have evolved over time. Such data may have been stored and filed in hard and paper copies and later in software and electronic formats. Today, certain data may be retrieved through agency websites or on Data.gov. Data.gov itself is a case study for open data, intended to demonstrate that making federal data available can help agencies avoid duplicative internal research, enable the discovery of complementary datasets held by other agencies, and empower employees to make better-informed, data-driven decisions, among other benefits.

Throughout 2025, media reports have suggested that the availability of federal data has been reduced. Some observers are also tracking the removal of specific datasets, variables, and tools. In parallel, changing public perspectives on data availability may demand new levels of data access, such as making data available for predictable periods of time, in a variety of software-compatible formats, and with appropriate descriptive metadata for easing findability and usability of the information. While statute discusses when and how information is to be added to Data.gov, it does not explain whether and how information may be removed. Although researchers and the public may derive value from being able to trace data over time to determine changes in trends or collection methods, the statute does not explicitly consider versioning requirements for agency data. However, requiring these attributes for Data.gov may help address or clarify difficulties in measuring data availability. Congress may be interested in determining whether there are trends in certain data becoming available or in when data is altered and removed. Such trends may provide insight and direction for Congress to further examine agency activities or make decisions to support new data use cases.

Information availability, of which data availability is a type, can be considered the intersection of when and how information is released. Section 3552 of Title 44 of the U.S. Code defines information availability as “ensuring timely and reliable access to and use of information.” Generally, statute and associated OMB guidance contemplate two types of information availability in terms of timing: (1) proactive disclosure and information dissemination and (2) request-based disclosure. Certain types of data have specific requirements in terms of formatting and structure to ensure that the information can be made available and potentially archived.

This report examines the variables of federal data availability and its policy underpinnings. The report discusses the state and concept of federal data availability and explains the information life cycle framework. It explains how information may be made available proactively or upon request through existing mechanisms and also explains statutory requirements for information dissemination, preservation, and whether and when information can be removed. The report concludes with policy options for Congress, including a review of efforts to preserve federal data through web captures; examining controls to assess data versioning, sourcing, and modifications; and, finally, considerations for implementing data governance and transparency mechanisms throughout agency structures…(More)”.

Availability of Federal Data: Policy Considerations for Disclosure, Preservation, and Governance

Article by Carl Zimmer: “Scientists publish more than 10 million studies and other publications a year. Some of those findings will add to humanity’s storehouse of knowledge. But some will be wrong.

To assess a study, scientists can replicate it to see if they get the same result. But seven years ago, a team of hundreds of scientists set out to find a faster way to judge new scientific literature. They built artificial intelligence systems to predict whether studies would hold up to scrutiny.

The project, funded by the Defense Advanced Research Projects Agency, or DARPA, was called Systematizing Confidence in Open Research and Evidence — SCORE, for short. The idea came from Adam Russell, then a program manager for the agency. He envisioned generating a kind of credit score for science.

“People can say, ‘Hey, this is likely to be robust, we can premise a policy on it,’” said Dr. Russell, who is now at the University of Southern California. “‘But this? Nah, this might make for a book in the airport.’”

The SCORE team inspected hundreds of studies, running many of them again, to better understand what makes research hold up. Now it is publishing a raft of papers on those efforts.

For now, a scientific credit score remains a dream, the researchers say. Artificial intelligence cannot make reliable predictions…

For more than 15 years, some scientists have been trying to change the culture. They started by documenting the extent of the problem. In the early 2010s, Dr. Nosek and colleagues replicated 100 psychology papers — and matched the original results only 39 percent of the time.

In another project, Dr. Nosek teamed up with cancer biologists to replicate 50 experiments on animals and human cells. Fewer than half of the results withstood their scrutiny…(More)”.
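The prediction task SCORE attempted can be caricatured with a tiny heuristic scorer. The features and weights below are illustrative assumptions only; SCORE's actual systems were trained and validated against hundreds of manually replicated studies.

```python
# Toy version of the task: estimate from a study's surface features
# whether it is likely to replicate. Feature choices and weights are
# hypothetical, loosely echoing signals the replication literature
# discusses (sample size, evidential strength, preregistration).
def replication_confidence(n_participants, p_value, preregistered):
    score = 0.0
    score += 0.3 if n_participants >= 100 else 0.0  # larger samples
    score += 0.4 if p_value < 0.005 else 0.0        # stronger evidence
    score += 0.3 if preregistered else 0.0          # preregistration
    return score  # 0.0 (fragile) .. 1.0 (robust)

studies = [
    ("Large preregistered trial", 250, 0.001, True),
    ("Small exploratory study", 24, 0.04, False),
]
for name, n, p, prereg in studies:
    print(name, replication_confidence(n, p, prereg))
```

The gap between this sketch and a trustworthy "credit score for science" is, in effect, the article's point: surface features alone do not reliably separate robust findings from fragile ones.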

Can Science Predict When a Study Won’t Hold Up?

Research Agenda & Bibliography of Proposals by Anna Lenhart: “In recent years, academics, advocates, and policymakers have proposed or discussed the need for a new digital regulator (NDR) – a new agency of the federal government that regulates the AI and technology industry, with a particular focus on market competition, data privacy, and transparency & safety. We have documented over 20 academic papers and studies, think tank reports, books and parts of books, essays and op-eds, and pieces of legislation that propose such agencies or analyze such proposals. 

On February 25, 2026, the Institute for Data, Democracy and Politics at George Washington University and the Vanderbilt Policy Accelerator hosted many of the experts who authored those proposals for a day-long summit to discuss the need for an NDR and open questions related to the design of the agency. Informed by those discussions, this research agenda outlines questions we believe still deserve additional research attention, across disciplines. We are publishing this agenda in the hope of inspiring scholarly work on these issues. Some areas may already have work that we inadvertently missed in our literature review, and we welcome input from those interested in these issues…(More)”.

Designing a New Digital Regulator

Paper by Anton Korinek & Joseph E. Stiglitz: “Rapid progress in new technologies such as AI has led to widespread anxiety about adverse labor market impacts. This paper asks how to guide innovative efforts so as to increase labor demand and create better-paying jobs while also evaluating the limitations of such an approach. We develop a theoretical framework to identify the properties that make an innovation desirable from the perspective of workers, including its technological complementarity to labor, the relative income of the affected workers, and the factor share of labor in producing the goods involved. Applications include robot taxation, factor-augmenting progress, and task automation. In our framework, the welfare benefits of steering technology are greater the less efficient social safety nets are. As technological progress devalues labor, the welfare benefits of steering are at first increased but, beyond a critical threshold, decline, and optimal policy shifts toward greater redistribution. Moreover, as labor’s economic value diminishes, steering progress focuses increasingly on enhancing human well-being rather than labor productivity…(More)”.

Steering Technological Progress
