LOGIC: Good Practice Principles for Mainstreaming Behavioural Public Policy


OECD Report: “This report outlines good practice principles intended to encourage the incorporation of behavioural perspectives as part of standard policymaking practice in government and governmental organisations. Evidence from the behavioural sciences is potentially transformative in many areas of government policy and administration. The 14 good practice principles, organised into five dimensions, present a guide to the consistent production and application of useful behavioural science evidence. Governments and governmental organisations looking to mainstream behavioural public policy may use the good practice principles and case studies included in this report to assess their current policy systems and develop strategies to further improve them…(More)”

AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1,006 Replika users, all of them students aged 18 or older who had been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

Towards a pan-EU Freedom of Information Act? Harmonizing Access to Information in the EU through the internal market competence


Paper by Alberto Alemanno and Sébastien Fassiaux: “This paper examines whether – and on what basis – the EU may harmonise the right of access to information across the Union. It does so by examining the available legal bases established by relevant international obligations, such as those stemming from the Council of Europe, and by EU primary law. It demonstrates that neither the Council of Europe – through the European Convention on Human Rights and the more recent Tromsø Convention – nor the EU – through Article 41 of the EU Charter of Fundamental Rights – requires the EU to enact minimum standards of access to information. That Charter provision, combined with Articles 10 and 11 TEU, instead requires only the EU institutions – not the EU Member States – to ensure public access to documents, including legislative texts and meeting minutes. Regulation 1049/2001 was adopted on such a legal basis (originally Art. 255 TEC) and should be revised accordingly. The paper demonstrates that the most promising legal basis enabling the EU to proceed towards the harmonisation of access to information within the EU is offered by Article 114 TFEU. It argues that harmonising the conditions governing access to information across Member States would facilitate cross-border activities and trade, thus enhancing the internal market. Moreover, this would ensure equal access to information for all EU citizens and residents, irrespective of their location within the EU. Therefore, the question is not whether but how the EU may – under Article 114 TFEU – act to harmonise access to information. While the EU enjoys wide legislative discretion under Article 114(1) TFEU, that discretion is not absolute but subject to limits derived from fundamental rights and principles such as proportionality, equality, and subsidiarity. Hence the need to design a type of harmonisation capable of preserving existing national FOIAs while strengthening the weakest ones. The only type of harmonisation fit for purpose would therefore be minimal, as opposed to maximal, merely defining the minimum conditions required of each Member State’s national legislation governing access to information…(More)”.

Digital Media and Grassroots Anti-Corruption


Open access book edited by Alice Mattoni: “Delving into a burgeoning field of research, this enlightening book utilises case studies from across the globe to explore how digital media is used at the grassroots level to combat corruption. Bringing together an impressive range of experts, Alice Mattoni deftly assesses the design, creation and use of a wide range of anti-corruption technologies…(More)”.

Digitalization in Practice


Book edited by Jessamy Perriam and Katrine Meldgaard Kjær: “…shows that as welfare is increasingly digitalized, an investigation of the social implications of this digitalization becomes increasingly pertinent. The book offers chapters on how the state operates, from the day-to-day practices of governance to keeping registers of businesses, from overarching and sometimes contradictory policies to considering how to best include citizens in digitalized processes. Moreover, the book takes a citizen perspective on key issues of access, identification and social harm to consider the social implications of digitalization in the everyday. The diversity of topics in Digitalization in Practice reflects how digitalization as an ongoing process and practice fundamentally impacts and often reshapes the relationship between states and citizens.

  • Provides much needed critical perspectives on digital states in practice.
  • Opens up provocative questions for further studies and research topics in digital states.
  • Showcases empirical studies of situations where digital states are enacted…(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers


Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, who are designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise . . . are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted Flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding child sexual abuse material (CSAM) or terrorist-type content, or on tracking groups that traffic in that content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.

Trusted escalation channels are a key mechanism for civil society organizations (CSOs) supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be much more efficient, and the DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale and to rethink them for the CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, that content be translated into English and that context be explained for English-speaking audiences (mainly because key decision-makers are based in the US), which creates an added burden for resource-strapped CSOs. The lack of transparency in the reporting process can be distressing for the victims those CSOs advocate for, and the lack of a timely response can have dramatic consequences for human rights defenders and for information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms have little incentive to promote them, given their inability to vet, prioritize and resolve every issue sent their way…(More)”.

The Simple Macroeconomics of AI


Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
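As a back-of-the-envelope illustration of the Hulten-style aggregation the abstract describes (aggregate TFP gain ≈ share of tasks impacted × average task-level cost saving), here is a minimal sketch in Python. The input values are placeholder assumptions chosen for illustration only, not the paper’s own exposure or cost-saving estimates.

```python
# Back-of-the-envelope Hulten-style aggregation described in the abstract:
# aggregate TFP gain ~= (share of tasks impacted by AI) * (average task-level cost saving).
# The numbers below are illustrative assumptions, not estimates taken from the paper.

def aggregate_tfp_gain(task_share: float, avg_cost_saving: float) -> float:
    """Approximate the aggregate TFP gain implied by task-level AI effects."""
    return task_share * avg_cost_saving

share_of_tasks_impacted = 0.20  # assumption: 20% of tasks are exposed to AI
average_cost_saving = 0.03      # assumption: 3% average cost saving on exposed tasks

gain = aggregate_tfp_gain(share_of_tasks_impacted, average_cost_saving)
print(f"Implied aggregate TFP gain: {gain:.2%}")  # prints "Implied aggregate TFP gain: 0.60%"
```

Under these placeholder inputs the implied gain is 0.6%, in the same modest range as the 0.66% ceiling quoted above; the paper’s argument is that plausible values for both inputs keep this product small.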

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.

Sorting the Self


Article by Christopher Yates: “We are unknown to ourselves, we knowers…and there is good reason for this. We have never looked for ourselves—so how are we ever supposed to find ourselves?” Much has changed since the late nineteenth century, when Nietzsche wrote those words. We now look obsessively for ourselves, and we find ourselves in myriad ways. Then we find more ways of finding ourselves. One involves a tool, around which grew a science, from which bloomed a faith, and from which fell the fruits of dogma. That tool is the questionnaire. The science is psychometrics. And the faith is a devotion to self-codification, of which the revelation of personality is the fruit.

Perhaps, whether on account of psychological evaluation and therapy, compulsory corporate assessments, spiritual direction endeavors, or just a sporting interest, you have had some experience of this phenomenon. Perhaps it has served you well. Or maybe you have puzzled over the strange avidity with which we enable standardized tests and the technicians or portals that administer them to gauge the meaning of our very being. Maybe you have been relieved to discover that, according to the 16 Personality Types assessments, you are an ISFP; or, according to the Enneagram, you are a 3 with a 2 or 4 wing. Or maybe you have been somewhat troubled by how this peculiar term personality, derived as it is from the Latin persona (meaning the masks once worn by players on stage), has become a repository of so many adjectives—one that violates Aristotle’s cardinal metaphysical rule against reducing a substance to its properties.

Either way, the self has never been more securely an object of classification than it is today, thanks to the century-long ascendance of behavioral analysis and scientific psychology, sociometry, taxonomic personology, and personality theory. Add to these the assorted psychodiagnostic instruments drawing on refinements of multiple regression analysis, multivariate and circumplex modeling, trait determination and battery-based assessments, and the ebbs and flows of psychoanalytic theory. Not to be overlooked, of course, is the popularizing power of evidence-based objective and predictive personality profiling inside and outside the laboratory and therapy chambers since Katharine Briggs began envisioning what would become the fabled person-sorting Myers-Briggs Type Indicator (MBTI) in 1919. A handful of phone calls, psychological referrals, job applications, and free or modestly priced hyperlinked platforms will place before you (and the eighty million or more other Americans who take these tests annually) more than two thousand personality assessments promising to crack your code. Their efficacy has become an object of our collective speculation. And by many accounts, their revelations make us not only known but also more empowered to live healthy and fulfilling lives. Nietzsche had many things, but he did not have PersonalityMax.com or PersonalityAssessor.com…(More)”.

When Online Content Disappears


Pew Research: “The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view…

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.
  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. News sites with a high level of site traffic and those with less are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists...(More)”.
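
For a concrete sense of what “no longer accessible” and “broken link” mean operationally, the sketch below flags a URL as broken when the request fails or returns an error status. It is an illustrative assumption about how such a check could be run, not a description of Pew’s actual methodology.

```python
# Illustrative broken-link check (an assumption about method, not Pew's actual pipeline).
# A URL is treated as no longer accessible if the request errors out or returns a 4xx/5xx status.
import requests

def is_broken(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL appears to be no longer accessible."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code in (405, 501):  # some servers reject HEAD; fall back to GET
            resp = requests.get(url, allow_redirects=True, timeout=timeout)
        return resp.status_code >= 400
    except requests.RequestException:
        return True

sample_links = [
    "https://www.pewresearch.org/",
    "https://example.com/a-page-that-probably-never-existed",
]
for url in sample_links:
    print(url, "->", "broken" if is_broken(url) else "reachable")
```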