“The Death of Wikipedia?” — Exploring the Impact of ChatGPT on Wikipedia Engagement


Paper by Neal Reeves, Wenjie Yin, Elena Simperl: “Wikipedia is one of the most popular websites in the world, serving as a major source of information and learning resource for millions of users worldwide. While motivations for its usage vary, prior research suggests shallow information gathering — looking up facts and information or answering questions — dominates over more in-depth usage. On the 22nd of November 2022, ChatGPT was released to the public and has quickly become a popular source of information, serving as an effective question-answering and knowledge gathering resource. Early indications have suggested that it may be drawing users away from traditional question answering services such as Stack Overflow, raising the question of how it may have impacted Wikipedia. In this paper, we explore Wikipedia user metrics across four areas: page views, unique visitor numbers, edit counts and editor numbers within twelve language instances of Wikipedia. We perform pairwise comparisons of these metrics before and after the release of ChatGPT and implement a panel regression model to observe and quantify longer-term trends. We find no evidence of a fall in engagement across any of the four metrics, instead observing that page views and visitor numbers increased in the period following ChatGPT’s launch. However, we observe a lower increase in languages where ChatGPT was available than in languages where it was not, which may suggest ChatGPT’s availability limited growth in those languages. Our results contribute to the understanding of how emerging generative AI tools are disrupting the Web ecosystem…(More)”. See also: Are we entering a Data Winter? On the urgent need to preserve data access for the public interest.
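The paper's before/after comparison can be illustrated with a minimal difference-in-differences sketch on synthetic data. This is only an illustration of the general panel-regression idea described in the abstract, not the authors' actual specification, data, or estimates; every number below is hypothetical.

```python
import numpy as np

# Synthetic panel: 12 language editions observed weekly, with ChatGPT
# hypothetically available in half of them. All coefficients are made up.
rng = np.random.default_rng(0)

n_langs, n_weeks = 12, 40
launch_week = 20                          # ChatGPT release point
available = np.array([1] * 6 + [0] * 6)   # hypothetical availability per language

rows = []
for lang in range(n_langs):
    for week in range(n_weeks):
        post = int(week >= launch_week)
        # True model: general growth after launch (+0.10), dampened where
        # ChatGPT is available (-0.04), plus language effects and noise.
        outcome = (1.0 + 0.05 * lang + 0.10 * post
                   - 0.04 * post * available[lang]
                   + rng.normal(0, 0.01))
        rows.append((outcome, post, post * available[lang], lang))

y = np.array([r[0] for r in rows])
post = np.array([r[1] for r in rows], dtype=float)
inter = np.array([r[2] for r in rows], dtype=float)
lang_ids = np.array([r[3] for r in rows])

# Design matrix: language fixed effects, post-launch dummy, and the
# post x availability interaction (the "dampened growth" term).
fe = (lang_ids[:, None] == np.arange(n_langs)).astype(float)
X = np.column_stack([fe, post, inter])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Expect roughly +0.10 (post-launch growth) and -0.04 (dampening where available).
print(round(beta[-2], 3), round(beta[-1], 3))
```

The interaction coefficient is the quantity of interest: a negative estimate in a design like this is what "a lower increase in languages where ChatGPT was available" would look like.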

Training new teachers with digital simulations


Report by the Susan McKinnon Foundation: “This report shows the findings of a rapid review of the global literature on immersive simulation for teacher preparation. It finds that immersive digital simulations – and corresponding supports – can create significant positive shifts in trainee teacher skills, knowledge, and self-efficacy. The evidence is strong; of the 35 articles in our review, 30 studies show positive improvements in trainee teacher outcomes. The 30 studies showing positive effects include studies with rigorous designs, including a comprehensive systematic review with many well designed randomised controlled studies (the ‘gold standard’ of research). Benefits are seen across a range of teaching skills, from classroom management and teaching instruction through to better communication skills with parents and colleagues.


Six active ingredients in the implementation of digital simulations are important. This includes incorporating opportunities for: [1] instructional coaching, [2] feedback, [3] observation, [4] visual examples or models of best practice, [5] high dosage, that is, practicing many times over, and [6] strong underpinning theory and content… (More)”.

LOGIC: Good Practice Principles for Mainstreaming Behavioural Public Policy


OECD Report: “This report outlines good practice principles intended to encourage the incorporation of behavioural perspectives as part of standard policymaking practice in government and governmental organisations. Evidence from the behavioural sciences is potentially transformative in many areas of government policy and administration. The 14 good practice principles, organised into five dimensions, present a guide to the consistent production and application of useful behavioural science evidence. Governments and governmental organisations looking to mainstream behavioural public policy may use the good practice principles and case studies included in this report to assess their current policy systems and develop strategies to further improve them…(More)”

AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1006 Replika users who were 18 years or older and students, and who’d been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

Towards a pan-EU Freedom of Information Act? Harmonizing Access to Information in the EU through the internal market competence


Paper by Alberto Alemanno and Sébastien Fassiaux: “This paper examines whether – and on what basis – the EU may harmonise the right of access to information across the Union. It does so by examining the available legal bases established by relevant international obligations, such as those stemming from the Council of Europe, and EU primary law. It demonstrates that neither the Council of Europe – through the European Convention on Human Rights and the more recent Tromsø Convention – nor the EU – through Article 41 of the EU Charter of Fundamental Rights – requires the EU to enact minimum standards of access to information. That Charter provision, combined with Articles 10 and 11 TEU, instead requires only the EU institutions – not the EU Member States – to ensure public access to documents, including legislative texts and meeting minutes. Regulation 1049/2001 was adopted on such a legal basis (originally Art. 255 TEC) and should be revised accordingly. The paper demonstrates that the most promising legal basis enabling the EU to proceed towards the harmonisation of access to information within the EU is offered by Article 114 TFEU. It argues that the harmonisation of the conditions governing access to information across Member States would facilitate cross-border activities and trade, thus enhancing the internal market. Moreover, this would ensure equal access to information for all EU citizens and residents, irrespective of their location within the EU. Therefore, the question is not whether but how the EU may – under Article 114 TFEU – act to harmonise access to information. While the EU enjoys wide legislative discretion under Article 114(1) TFEU, that discretion is not absolute but subject to limits derived from fundamental rights and principles such as proportionality, equality, and subsidiarity. Hence the need to design a type of harmonisation capable of preserving existing national FOIAs while enhancing the weakest ones.
The only type of harmonisation fit for purpose would therefore be minimal, as opposed to maximal, merely defining the minimum conditions that each Member State's national legislation governing access to information must meet…(More)”.

Digital Media and Grassroots Anti-Corruption


Open access book edited by Alice Mattoni: “Delving into a burgeoning field of research, this enlightening book utilises case studies from across the globe to explore how digital media is used at the grassroots level to combat corruption. Bringing together an impressive range of experts, Alice Mattoni deftly assesses the design, creation and use of a wide range of anti-corruption technologies…(More)”.

Digitalization in Practice


Book edited by Jessamy Perriam and Katrine Meldgaard Kjær: “…shows that as welfare is increasingly digitalized, an investigation of the social implications of this digitalization becomes increasingly pertinent. The book offers chapters on how the state operates, from the day-to-day practices of governance to keeping registers of businesses, from overarching and sometimes contradictory policies to considering how to best include citizens in digitalized processes. Moreover, the book takes a citizen perspective on key issues of access, identification and social harm to consider the social implications of digitalization in the everyday. The diversity of topics in Digitalization in Practice reflects how digitalization as an ongoing process and practice fundamentally impacts and often reshapes the relationship between states and citizens.

  • Provides much needed critical perspectives on digital states in practice.
  • Opens up provocative questions for further studies and research topics in digital states.
  • Showcases empirical studies of situations where digital states are enacted…(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers


Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, who are designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise . . . are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding CSAM or terrorist-type content, or tracking groups that traffic in that content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.

Trusted escalation channels are a key mechanism for civil society organizations (CSOs) supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be much more efficient. The DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale. They need to be rethought for CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, content to be translated into English and context to be understood by English-speaking audiences (due mainly to the fact that the key decision-makers are based in the US), which creates an added burden for CSOs that are resource-strapped. The lack of transparency in the reporting process can be distressing for the victims for whom those CSOs advocate. The lack of timely response can lead to dramatic consequences for human rights defenders and information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms are not incentivized to promote these mechanisms, given their inability to vet, prioritize, and resolve all potential issues sent to them….(More)”.

The Simple Macroeconomics of AI


Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. 
Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
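The Hulten-style back-of-envelope described in the abstract multiplies the share of tasks impacted by the average task-level cost savings. The sketch below uses illustrative inputs, not the paper's actual estimates, chosen only so the product lands near the 0.66% figure quoted above.

```python
# Hulten-style back-of-envelope: aggregate TFP gain ≈
# (share of tasks impacted by AI) × (average task-level cost savings).
# All three inputs are hypothetical placeholders, not Acemoglu's numbers.
task_exposure = 0.20      # share of tasks exposed to AI (assumed)
adoption_rate = 0.23      # share of exposed tasks actually affected (assumed)
cost_savings = 0.1435     # average cost saving on affected tasks (assumed)

tfp_gain = task_exposure * adoption_rate * cost_savings
print(f"10-year TFP gain ≈ {tfp_gain:.2%}")  # ≈ 0.66%
```

The abstract's argument that early estimates are drawn from easy-to-learn tasks amounts to lowering the effective cost-savings input, which is how the revised figure of under 0.53% arises.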

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects–see below–the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.