How the war on drunk driving was won


Blog by Nick Cowen: “…Viewed from the 1960s, it might have seemed that ending drunk driving would be impossible. Even in the 1980s, the movement seemed unlikely to succeed, and many researchers questioned whether drunk driving constituted a social problem at all.

Yet things did change: in 1980, 1,450 fatalities were attributed to drunk driving accidents in the UK. In 2020, there were 220. Road deaths in general declined much more slowly, from around 6,000 in 1980 to 1,500 in 2020. Drunk driving fatalities fell both in absolute terms and as a share of all road deaths.

The same thing happened in the United States, though not to quite the same extent. In 1980, there were around 28,000 drunk driving deaths there, while in 2020, there were 11,654. Despite this progress, drunk driving remains a substantial public threat, comparable in scale to homicide (in 2020 there were 594 homicides in Britain and 21,570 in America).

Of course, many things have happened in the last 40 years that contributed to this reduction. Vehicles are better designed to prioritize life preservation in the event of a collision. Emergency hospital care has improved so that people are more likely to survive serious injuries from car accidents. But, above all, driving while drunk has become stigmatized.

This stigma didn’t come from nowhere. Governments across the Western world, along with many civil society organizations, engaged in hard-hitting education campaigns about the risks of drunk driving. And they didn’t just talk. Tens of thousands of people faced criminal sanctions, and many were even put in jail.

Two underappreciated ideas stick out from this experience. First, deterrence works: incentives matter to offenders much more than many scholars initially found plausible. Second, the long-run impact of successful criminal justice interventions lies not primarily in rehabilitation, incapacitation, or even deterrence, but in altering the social norms around acceptable behavior…(More)”.

AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors involved in a conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact of all actors across a conflict. This allows Project Didi to identify and create connections between organisations and people doing similar work, amplifying their effects…(More)” See also: Project Didi (Kluz Prize)

Sorting the Self


Article by Christopher Yates: “We are unknown to ourselves, we knowers…and there is good reason for this. We have never looked for ourselves—so how are we ever supposed to find ourselves?” Much has changed since the late nineteenth century, when Nietzsche wrote those words. We now look obsessively for ourselves, and we find ourselves in myriad ways. Then we find more ways of finding ourselves. One involves a tool, around which grew a science, from which bloomed a faith, and from which fell the fruits of dogma. That tool is the questionnaire. The science is psychometrics. And the faith is a devotion to self-codification, of which the revelation of personality is the fruit.

Perhaps, whether on account of psychological evaluation and therapy, compulsory corporate assessments, spiritual direction endeavors, or just a sporting interest, you have had some experience of this phenomenon. Perhaps it has served you well. Or maybe you have puzzled over the strange avidity with which we enable standardized tests and the technicians or portals that administer them to gauge the meaning of our very being. Maybe you have been relieved to discover that, according to the 16 Personality Types assessments, you are an ISFP; or, according to the Enneagram, you are a 3 with a 2 or 4 wing. Or maybe you have been somewhat troubled by how this peculiar term personality, derived as it is from the Latin persona (meaning the masks once worn by players on stage), has become a repository of so many adjectives—one that violates Aristotle’s cardinal metaphysical rule against reducing a substance to its properties.

Either way, the self has never been more securely an object of classification than it is today, thanks to the century-long ascendancy of behavioral analysis and scientific psychology, sociometry, taxonomic personology, and personality theory. Add to these the assorted psychodiagnostic instruments drawing on refinements of multiple regression analysis, multivariate and circumplex modeling, trait determination, and battery-based assessments, and the ebbs and flows of psychoanalytic theory. Not to be overlooked, of course, is the popularizing power of evidence-based objective and predictive personality profiling inside and outside the laboratory and therapy chambers since Katharine Briggs began envisioning what would become the fabled person-sorting Myers-Briggs Type Indicator (MBTI) in 1919. A handful of phone calls, psychological referrals, job applications, and free or modestly priced hyperlinked platforms will place before you (and the eighty million or more other Americans who take these tests annually) more than two thousand personality assessments promising to crack your code. Their efficacy has become an object of our collective speculation. And by many accounts, their revelations make us not only known but also more empowered to live healthy and fulfilling lives. Nietzsche had many things, but he did not have PersonalityMax.com or PersonalityAssessor.com…(More)”.

When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with high traffic and those with lower traffic are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
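
For readers curious how such a measurement works in practice, here is one minimal way to test whether a list of URLs still resolves. This is an illustrative sketch, not Pew's actual methodology: a link counts as broken if the request fails outright or the server answers with an error status, and the widely used `requests` library is assumed to be installed.

```python
"""Minimal link-rot check in the spirit of the Pew analysis (assumptions mine)."""
import requests

def is_accessible(url, timeout=10):
    """Return True if the URL currently resolves to a non-error page."""
    try:
        # Some servers reject HEAD, so fall back to a streamed GET.
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        if resp.status_code == 405:
            resp = requests.get(url, timeout=timeout, stream=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False   # DNS failure, timeout, connection refused, ...

urls = [
    "https://www.pewresearch.org/",
    "https://example.com/a-page-that-no-longer-exists",
]
broken = [u for u in urls if not is_accessible(u)]
print(f"{len(broken)}/{len(urls)} links broken: {broken}")
```

A real crawl would also need politeness delays, retries, and handling of "soft 404" pages that return a 200 status for content that is actually gone.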

Defining AI incidents and related terms


OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.
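
As a concrete (and entirely hypothetical) illustration of the incident/hazard distinction, the sketch below encodes the two terms in a small registry record. The field names and classification rule are this sketch's own assumptions, not a schema from the OECD report.

```python
from dataclasses import dataclass
from enum import Enum

class EventType(Enum):
    INCIDENT = "AI incident"   # an event where harm actually occurred
    HAZARD = "AI hazard"       # a situation that could plausibly lead to harm

@dataclass
class AIEventReport:
    """Hypothetical registry record; the report proposes the definitions, not this schema."""
    system: str            # the AI system involved
    description: str
    harm_occurred: bool    # did actual harm materialize?

    @property
    def event_type(self) -> EventType:
        # Actual harm -> incident; potential harm only -> hazard.
        return EventType.INCIDENT if self.harm_occurred else EventType.HAZARD

report = AIEventReport(
    system="automated loan-screening model",
    description="systematic mis-scoring detected before any decisions shipped",
    harm_occurred=False,
)
print(report.event_type)   # EventType.HAZARD
```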

Dynamic Collective Action and the Power of Large Numbers


Paper by Marco Battaglini & Thomas R. Palfrey: “Collective action is a dynamic process where individuals in a group assess over time the benefits and costs of participating toward the success of a collective goal. Early participation improves the expectation of success and thus stimulates the subsequent participation of other individuals who might otherwise be unwilling to engage. On the other hand, a slow start can depress expectations and lead to failure for the group. Individuals have an incentive to procrastinate, not only in the hope of free riding, but also in order to observe the flow of participation by others, which allows them to better gauge whether their own participation will be useful or simply wasted. How do these phenomena affect the probability of success for a group? As the size of the group increases, will a “power of large numbers” prevail producing successful outcomes, or will a “curse of large numbers” lead to failure? In this paper, we address these questions by studying a dynamic collective action problem in which n individuals can achieve a collective goal if a share of them takes a costly action (e.g., participate in a protest, join a picket line, or sign an environmental agreement). Individuals have privately known participation costs and decide over time if and when to participate. We characterize the equilibria of this game and show that under general conditions the eventual success of collective action is necessarily probabilistic. The process starts for sure, and hence there is always a positive probability of success; however, the process “gets stuck” with positive probability, in the sense that participation stops short of the goal. Equilibrium outcomes have a simple characterization in large populations: welfare converges to either full efficiency or zero as n→∞ depending on a precise condition on the rate at which the share required for success converges to zero. Whether success is achievable or not, delays are always irrelevant: in the limit, success is achieved either instantly or never…(More)”
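
The flavor of the model is easy to see in a toy simulation. The sketch below is not the authors' equilibrium analysis: it replaces strategic waiting with an ad hoc rule in which a non-participant joins once observed participation makes success look promising enough relative to her private cost, and all parameter values are assumptions for illustration. Even this crude version reproduces the two headline phenomena: the process sometimes "gets stuck" short of the goal, and success becomes far more likely as n grows.

```python
import random

def simulate(n, share=0.2, periods=50, seed=None):
    """Toy dynamic participation process. Each agent's cost is private,
    drawn uniformly from [0, 1]. Each period, a non-participant joins
    if her cost is below a cutoff that rises with the share already
    participating: early momentum stimulates later participation."""
    rng = random.Random(seed)
    costs = [rng.random() for _ in range(n)]
    goal = max(1, int(share * n))      # participants needed for success
    joined = [False] * n

    for _ in range(periods):
        current = sum(joined)
        if current >= goal:
            return True                # collective goal reached
        # Ad hoc cutoff: low at the start, rising as the goal nears.
        cutoff = 0.1 + 0.9 * current / goal
        newcomers = 0
        for i in range(n):
            if not joined[i] and costs[i] < cutoff:
                joined[i] = True
                newcomers += 1
        if newcomers == 0:
            return False               # the process "gets stuck"
    return sum(joined) >= goal

# Success frequency rises sharply with group size.
for n in (10, 100, 1000):
    successes = sum(simulate(n, seed=s) for s in range(500))
    print(f"n={n:5d}  success rate ~ {successes / 500:.2f}")
```

With n = 10 the process frequently stalls in the first period (no agent's cost falls below the initial cutoff), while with n = 1000 an early trickle of low-cost joiners almost always snowballs to the goal, a crude analogue of the paper's "power of large numbers".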

On the Meaning of Community Consent in a Biorepository Context


Article by Astha Kapoor, Samuel Moore, and Megan Doerr: “Biorepositories, vital for medical research, collect and store human biological samples and associated data for future use. However, our reliance solely on the individual consent of data contributors for biorepository data governance is becoming inadequate. Big data analysis focuses on large-scale behaviors and patterns, shifting focus from singular data points to identifying data “journeys” relevant to a collective. The individual becomes a small part of the analysis, with the harms and benefits emanating from the data occurring at an aggregated level.

Community refers to a particular qualitative aspect of a group of people that is not well captured by quantitative measures in biorepositories. This is not an excuse to dodge the question of how to account for communities in a biorepository context; rather, it shows that a framework is needed for defining different types of community that may be approached from a biorepository perspective. 

Engaging with communities in biorepository governance presents several challenges. Moving away from a purely individualized understanding of governance towards a more collectivizing approach necessitates an appreciation of the messiness of group identity, its ephemerality, and the conflicts entailed therein. So while community implies a certain degree of homogeneity (i.e., that all members of a community share something in common), it is important to understand that people can simultaneously consider themselves a member of a community while disagreeing with many of its members, the values the community holds, or the positions for which it advocates. The complex nature of community participation therefore requires proper treatment for it to be useful in a biorepository governance context…(More)”.

Multiple Streams and Policy Ambiguity


Book by Rob A. DeLeo, Reimut Zohlnhöfer and Nikolaos Zahariadis: “The last decade has seen a proliferation of research bolstering the theoretical and methodological rigor of the Multiple Streams Framework (MSF), one of the most prolific theories of agenda-setting and policy change. This Element sets out to address some of the most prominent criticisms of the theory, including the lack of empirical research and the inconsistent operationalization of key concepts, by developing the first comprehensive guide for conducting MSF research. It begins by introducing the MSF, including key theoretical constructs and hypotheses. It then presents the most important theoretical extensions of the framework and articulates a series of best practices for operationalizing, measuring, and analyzing MSF concepts. It closes by exploring existing gaps in MSF research and articulating fruitful areas of future research…(More)”.

How Open-Source Software Empowers Nonprofits And The Global Communities They Serve


Article by Steve Francis: “One particular area where this challenge is evident is climate. Thousands of nonprofits strive to address the effects of a changing climate and its impact on communities worldwide. Headlines often go to big organizations doing high-profile work (planting trees, for instance) in well-known places. Money goes to large-scale commercial agriculture or new technologies — because that’s where profits are most easily made. But thousands of other communities of small farmers that aren’t as visible or profitable need help too. These communities come together to tackle a number of interrelated problems: climate, soil health and productivity, biodiversity, and human health and welfare. They envision a more sustainable future.

The reality is that software is crafted to meet market needs, but these communities don’t represent a profitable market. Every major industry has its own software applications and a network of consultants to tune that software for optimal performance. A farm cooperative in less developed parts of the world seeking to maximize value for sustainably harvested produce faces very different challenges than do any of these business users. Often they need to collect and manipulate data in the field, on whatever mobile device they have, with little or no connectivity. Modern software systems are rarely designed to operate in such an environment; they assume the latest devices and continuous connectivity…(More)”.
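
To make the design gap concrete, here is a minimal sketch of the offline-first pattern such field tools need; all names and details are assumed for illustration. Records are appended to a local file immediately, so data capture never depends on the network, and a separate sync step pushes queued records whenever connectivity happens to appear.

```python
"""Sketch of an offline-first capture-and-sync pattern (assumptions mine)."""
import json
import os
import time

QUEUE_PATH = "pending_records.jsonl"   # local store; works with zero connectivity

def record(data: dict) -> None:
    """Append a record locally; never blocks on the network."""
    data["captured_at"] = time.time()
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(data) + "\n")

def sync(upload) -> int:
    """Try to push queued records via `upload(record) -> bool`;
    keep anything that fails for the next attempt."""
    if not os.path.exists(QUEUE_PATH):
        return 0
    with open(QUEUE_PATH) as f:
        pending = [json.loads(line) for line in f]
    kept, sent = [], 0
    for rec in pending:
        if upload(rec):
            sent += 1
        else:
            kept.append(rec)
    with open(QUEUE_PATH, "w") as f:
        for rec in kept:
            f.write(json.dumps(rec) + "\n")
    return sent

if __name__ == "__main__":
    record({"plot": "7B", "crop": "millet", "yield_kg": 31})
    sent = sync(lambda rec: True)   # stand-in uploader; always "succeeds"
    print(f"synced {sent} record(s)")
```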

Routledge Handbook of Risk, Crisis, and Disaster Communication


Book edited by Brooke Fisher Liu, and Amisha M. Mehta: “With contributions from leading academic experts and practitioners from diverse disciplinary backgrounds including communication, disaster, and health, this Handbook offers a valuable synthesis of current knowledge and future directions for the field. It is divided into four parts. Part One begins with an introduction to foundational theories and pedagogies for risk and crisis communication. Part Two elucidates knowledge and gaps in communicating about climate and weather, focusing on community and corporate positions and considering text and visual communication with examples from the US and Australia. Part Three provides insights on communicating ongoing and novel risks, crises, and disasters from US and European perspectives, which cover how to define new risks and translate theories and methodologies so that their study can support important ongoing research and practice. Part Four delves into communicating with diverse publics and audiences with authors examining community, first responder, and employee perspectives within developed and developing countries to enhance our understanding and inspire ongoing research that is contextual, nuanced, and impactful. Offering innovative insights into ongoing and new topics, this handbook explores how the field of risk, crisis, and disaster communications can benefit from theory, technology, and practice…(More)”