When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with high traffic and those with lower traffic are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
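
At its core, the measurement behind figures like these is a simple link check: does a URL still resolve to a working page? The sketch below is a minimal illustration of that idea in Python, not Pew Research Center's methodology; the example URLs, timeout, and status-code threshold are assumptions.

```python
# Minimal sketch: flag links that no longer resolve to a working page.
# Illustration only, not Pew Research Center's methodology.
import requests

def is_broken(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL returns an HTTP error or cannot be reached."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code == 405:  # some servers reject HEAD; retry with GET
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return resp.status_code >= 400
    except requests.RequestException:
        return True  # DNS failure, timeout, connection error, etc.

links = [
    "https://www.pewresearch.org/",
    "https://example.com/a-page-that-probably-does-not-exist",
]
broken = [u for u in links if is_broken(u)]
print(f"{len(broken)} of {len(links)} links appear broken")
```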

Defining AI incidents and related terms


OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.

Dynamic Collective Action and the Power of Large Numbers


Paper by Marco Battaglini & Thomas R. Palfrey: “Collective action is a dynamic process where individuals in a group assess over time the benefits and costs of participating toward the success of a collective goal. Early participation improves the expectation of success and thus stimulates the subsequent participation of other individuals who might otherwise be unwilling to engage. On the other hand, a slow start can depress expectations and lead to failure for the group. Individuals have an incentive to procrastinate, not only in the hope of free riding, but also in order to observe the flow of participation by others, which allows them to better gauge whether their own participation will be useful or simply wasted. How do these phenomena affect the probability of success for a group? As the size of the group increases, will a “power of large numbers” prevail, producing successful outcomes, or will a “curse of large numbers” lead to failure? In this paper, we address these questions by studying a dynamic collective action problem in which n individuals can achieve a collective goal if a share of them takes a costly action (e.g., participate in a protest, join a picket line, or sign an environmental agreement). Individuals have privately known participation costs and decide over time if and when to participate. We characterize the equilibria of this game and show that under general conditions the eventual success of collective action is necessarily probabilistic. The process starts for sure, and hence there is always a positive probability of success; however, the process “gets stuck” with positive probability, in the sense that participation stops short of the goal. Equilibrium outcomes have a simple characterization in large populations: welfare converges to either full efficiency or zero as n→∞, depending on a precise condition on the rate at which the share required for success converges to zero. Whether success is achievable or not, delays are always irrelevant: in the limit, success is achieved either instantly or never…(More)”
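
The setup described in the abstract (n agents with privately known costs deciding over time whether to join, with success requiring a minimum share of participants) can be illustrated with a toy simulation. The sketch below uses an assumed myopic participation rule and an assumed uniform cost distribution; it is not the authors' equilibrium analysis, only a way to see how such a process can either snowball to the goal or get stuck short of it.

```python
# Toy simulation of threshold collective action (illustrative only, not the
# Battaglini-Palfrey equilibrium analysis): agents with privately known costs
# join over time under an assumed myopic rule, and the process either reaches
# the required share or "gets stuck" short of it.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n: int, goal_share: float, periods: int = 50) -> bool:
    costs = rng.uniform(0.0, 2.0, size=n)  # privately known costs (assumed distribution)
    joined = np.zeros(n, dtype=bool)
    for _ in range(periods):
        share = joined.mean()
        if share >= goal_share:
            return True
        # Assumed myopic rule: the perceived value of joining rises with the
        # current participation rate relative to the required share.
        perceived_value = 0.2 + 0.8 * min(1.0, share / goal_share)
        newly = (~joined) & (costs < perceived_value)
        if not newly.any():
            return False  # no one else is willing to move: the process is stuck
        joined |= newly
    return joined.mean() >= goal_share

for goal in (0.3, 0.6):
    wins = sum(simulate(n=200, goal_share=goal) for _ in range(200))
    print(f"required share {goal}: success rate {wins / 200:.2f}")
```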

On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, our reliance solely on the individual consent of data contributors for biorepository data governance is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting focus from singular data points to identifying data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Multiple Streams and Policy Ambiguity


Book by Rob A. DeLeo, Reimut Zohlnhöfer and Nikolaos Zahariadis: “The last decade has seen a proliferation of research bolstering the theoretical and methodological rigor of the Multiple Streams Framework (MSF), one of the most prolific theories of agenda-setting and policy change. This Element sets out to address some of the most prominent criticisms of the theory, including the lack of empirical research and the inconsistent operationalization of key concepts, by developing the first comprehensive guide for conducting MSF research. It begins by introducing the MSF, including key theoretical constructs and hypotheses. It then presents the most important theoretical extensions of the framework and articulates a series of best practices for operationalizing, measuring, and analyzing MSF concepts. It closes by exploring existing gaps in MSF research and articulating fruitful areas of future research…(More)”.

How Open-Source Software Empowers Nonprofits And The Global Communities They Serve


Article by Steve Francis: “One particular area where this challenge is evident is climate. Thousands of nonprofits strive to address the effects of a changing climate and its impact on communities worldwide. Headlines often go to big organizations doing high-profile work (planting trees, for instance) in well-known places. Money goes to large-scale commercial agriculture or new technologies — because that’s where profits are most easily made. But thousands of other communities of small farmers that aren’t as visible or profitable need help too. These communities come together to tackle a number of interrelated problems: climate, soil health and productivity, biodiversity and human health and welfare. They envision a more sustainable future.

The reality is that software is crafted to meet market needs, but these communities don’t represent a profitable market. Every major industry has its own software applications and a network of consultants to tune that software for optimal performance. A farm cooperative in less developed parts of the world seeking to maximize value for sustainably harvested produce faces very different challenges than do any of these business users. Often they need to collect and manipulate data in the field, on whatever mobile device they have, with little or no connectivity. Modern software systems are rarely designed to operate in such an environment; they assume the latest devices and continuous connectivity…(More)”.
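
The constraint described here, collecting data in the field with little or no connectivity, maps onto a familiar offline-first pattern: persist each observation locally and upload opportunistically when a connection becomes available. The sketch below is a minimal illustration of that pattern in Python; the endpoint URL, record fields, and sync behaviour are assumptions, not a description of any particular nonprofit's system.

```python
# Minimal offline-first sketch (illustrative only): field records are appended
# to a local JSON-lines file immediately, and uploaded later when the device
# regains connectivity. Endpoint and record schema are assumed for the example.
import json
from pathlib import Path

import requests

QUEUE_FILE = Path("field_records.jsonl")      # local durable queue
SYNC_URL = "https://example.org/api/records"  # hypothetical endpoint

def record_observation(record: dict) -> None:
    """Persist an observation locally first, so no data is lost offline."""
    with QUEUE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def sync_pending() -> int:
    """Try to upload queued records; keep whatever fails for the next attempt."""
    if not QUEUE_FILE.exists():
        return 0
    lines = QUEUE_FILE.read_text(encoding="utf-8").splitlines()
    pending = [json.loads(line) for line in lines if line]
    remaining, sent = [], 0
    for rec in pending:
        try:
            resp = requests.post(SYNC_URL, json=rec, timeout=10)
            resp.raise_for_status()
            sent += 1
        except requests.RequestException:
            remaining.append(rec)  # still offline or server error: retry later
    QUEUE_FILE.write_text("".join(json.dumps(r) + "\n" for r in remaining), encoding="utf-8")
    return sent

record_observation({"plot_id": "A-12", "soil_moisture": 0.31, "harvest_kg": 42})
print(f"uploaded {sync_pending()} queued records")
```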

Routledge Handbook of Risk, Crisis, and Disaster Communication


Book edited by Brooke Fisher Liu, and Amisha M. Mehta: “With contributions from leading academic experts and practitioners from diverse disciplinary backgrounds including communication, disaster, and health, this Handbook offers a valuable synthesis of current knowledge and future directions for the field. It is divided into four parts. Part One begins with an introduction to foundational theories and pedagogies for risk and crisis communication. Part Two elucidates knowledge and gaps in communicating about climate and weather, focusing on community and corporate positions and considering text and visual communication with examples from the US and Australia. Part Three provides insights on communicating ongoing and novel risks, crises, and disasters from US and European perspectives, which cover how to define new risks and translate theories and methodologies so that their study can support important ongoing research and practice. Part Four delves into communicating with diverse publics and audiences with authors examining community, first responder, and employee perspectives within developed and developing countries to enhance our understanding and inspire ongoing research that is contextual, nuanced, and impactful. Offering innovative insights into ongoing and new topics, this handbook explores how the field of risk, crisis, and disaster communications can benefit from theory, technology, and practice…(More)”

Building a trauma-informed algorithmic assessment toolkit


Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. Concurrently, if not well considered, automation also provides ways to generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, together with emerging governmental and organisational regulatory settings and associated professions.

The research outlined in this report took a different approach. Building on work in social services on trauma-informed practice, researchers identified key principles and a practical framework that treat AI design, development and deployment as a reflective, constructive exercise, resulting in algorithmically supported services that are cognisant and inclusive of the diversity of human experience, particularly of those who have experienced trauma. This study resulted in a practical, co-designed, piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation; design; development; piloting; deployment or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the tool will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.

Predicting hotspots of unsheltered homelessness using geospatial administrative data and volunteered geographic information


Paper by Jessie Chien, Benjamin F. Henwood, Patricia St. Clair, Stephanie Kwack, and Randall Kuhn: “Unsheltered homelessness is an increasingly prevalent phenomenon in major cities that is associated with adverse health and mortality outcomes. This creates a need for spatial estimates of population denominators for resource allocation and epidemiological studies. Gaps in the timeliness, coverage, and spatial specificity of official Point-in-Time Counts of unsheltered homelessness suggest a role for geospatial data from alternative sources to provide interim, neighborhood-level estimates of counts and trends. We use citizen-generated data from homeless-related 311 requests, provider-based administrative data from homeless street outreach cases, and expert reports of unsheltered counts to predict counts and emerging hotspots of unsheltered homelessness in census tracts across the City of Los Angeles for 2019 and 2020. Our study shows that alternative data sources can contribute timely insights into the state of unsheltered homelessness throughout the year and inform the delivery of interventions to this vulnerable population…(More)”.
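
One generic way to set up the kind of tract-level prediction described above is to aggregate the alternative data sources by census tract and fit a count model. The sketch below does this with a Poisson regression on synthetic data; the feature names, model choice, and hotspot threshold are assumptions for illustration, not the authors' specification.

```python
# Illustrative sketch only: predict tract-level unsheltered counts from
# 311 requests and outreach cases using a Poisson regression on synthetic data.
# This is not the authors' model; features and parameters are assumed.
import numpy as np
import pandas as pd
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(42)
n_tracts = 300

tracts = pd.DataFrame({
    "requests_311": rng.poisson(12, n_tracts),    # homeless-related 311 requests per tract
    "outreach_cases": rng.poisson(8, n_tracts),   # street outreach case records per tract
})
# Synthetic "true" counts loosely driven by both signals.
true_rate = 0.5 + 0.3 * tracts["requests_311"] + 0.4 * tracts["outreach_cases"]
tracts["unsheltered_count"] = rng.poisson(true_rate)

X = tracts[["requests_311", "outreach_cases"]]
y = tracts["unsheltered_count"]

model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(X, y)
tracts["predicted_count"] = model.predict(X)

# Flag candidate "hotspots": tracts in the top decile of predicted counts.
threshold = tracts["predicted_count"].quantile(0.9)
hotspots = tracts[tracts["predicted_count"] >= threshold]
print(f"{len(hotspots)} candidate hotspot tracts out of {n_tracts}")
```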

Applying Social and Behavioral Science to Federal Policies and Programs to Deliver Better Outcomes


The White House: “Human behavior is a key component of every major national and global challenge. Social and behavioral science examines if, when, and how people’s actions and interactions influence decisions and outcomes. Understanding human behavior through social and behavioral science is vitally important for creating federal policies and programs that open opportunities for everyone.

Today, the Biden-Harris Administration shares the Blueprint for the Use of Social and Behavioral Science to Advance Evidence-Based Policymaking. This blueprint recommends actions for agencies across the federal government to effectively leverage social and behavioral science in improving policymaking to deliver better outcomes and opportunities for people all across America. These recommendations include specific actions for agencies, such as considering social and behavioral insights early in policy or program development. The blueprint also lays out broader opportunities for agencies, such as ensuring agencies have a sufficient number of staff with social and behavioral science expertise.  

The blueprint includes nearly a hundred examples of how social and behavioral science is already used to make real progress on our highest priorities, including promoting safe, equitable, and engaged communities; protecting the environment and promoting climate innovation; advancing economic prosperity and the future of the workforce; enhancing the health outcomes of all Americans; rebuilding our infrastructure and building for tomorrow; and promoting national defense and international security. Social and behavioral science informs the conceptualization, development, implementation, dissemination, and evaluation of interventions, programs, and policies. Policymakers and social scientists can examine data about how government services reach people or measure the effectiveness of a program in assisting a particular community. Using this information, we can understand why programs sometimes fall short in delivering their intended benefits or why other programs are highly successful in delivering benefits. These approaches also help us design better policies and scale proven successful interventions to benefit the entire country…(More)”.