“The Death of Wikipedia?” — Exploring the Impact of ChatGPT on Wikipedia Engagement


Paper by Neal Reeves, Wenjie Yin, Elena Simperl: “Wikipedia is one of the most popular websites in the world, serving as a major source of information and learning resource for millions of users worldwide. While motivations for its usage vary, prior research suggests shallow information gathering — looking up facts and information or answering questions — dominates over more in-depth usage. On the 22nd of November 2022, ChatGPT was released to the public and has quickly become a popular source of information, serving as an effective question-answering and knowledge gathering resource. Early indications have suggested that it may be drawing users away from traditional question answering services such as Stack Overflow, raising the question of how it may have impacted Wikipedia. In this paper, we explore Wikipedia user metrics across four areas: page views, unique visitor numbers, edit counts and editor numbers within twelve language instances of Wikipedia. We perform pairwise comparisons of these metrics before and after the release of ChatGPT and implement a panel regression model to observe and quantify longer-term trends. We find no evidence of a fall in engagement across any of the four metrics, instead observing that page views and visitor numbers increased in the period following ChatGPT’s launch. However, we observe a lower increase in languages where ChatGPT was available than in languages where it was not, which may suggest ChatGPT’s availability limited growth in those languages. Our results contribute to the understanding of how emerging generative AI tools are disrupting the Web ecosystem…(More)”. See also: Are we entering a Data Winter? On the urgent need to preserve data access for the public interest.
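The paper's identification strategy, comparing engagement before and after ChatGPT's launch across language editions where it was and was not available, lends itself to a standard panel regression with language fixed effects. Below is a minimal sketch of that style of model in Python with statsmodels; the data file and column names are hypothetical placeholders, not the authors' actual specification:

```python
# A minimal sketch of a panel-style model in the spirit of the paper's
# analysis; data file and column names are hypothetical, not the authors'
# exact specification.
import pandas as pd
import statsmodels.formula.api as smf

# One row per (language edition, week), with hypothetical columns:
#   log_views         : log of weekly page views
#   post_chatgpt      : 1 for weeks after 22 November 2022, else 0
#   chatgpt_available : 1 if ChatGPT was available to that language's speakers
#   lang              : language edition identifier
df = pd.read_csv("wikipedia_panel.csv")  # hypothetical input file

# C(lang) absorbs language fixed effects (which also absorb the main effect
# of availability, since it does not vary within a language). The interaction
# asks whether post-launch growth differed where ChatGPT was available.
model = smf.ols(
    "log_views ~ post_chatgpt + post_chatgpt:chatgpt_available + C(lang)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["lang"]})
print(model.summary())
```

In a specification like this, a positive coefficient on `post_chatgpt` alongside a negative interaction term would match the paper's finding: overall growth after launch, but less of it where ChatGPT was available.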

AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1006 Replika users who were 18 years or older and students, and who’d been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

Science in the age of AI


Report by the Royal Society: “The unprecedented speed and scale of progress with artificial intelligence (AI) in recent years suggests society may be living through an inflection point. With the growing availability of large datasets, new algorithmic techniques and increased computing power, AI is becoming an established tool used by researchers across scientific fields who seek novel solutions to age-old problems. Now more than ever, we need to understand the extent of the transformative impact of AI on science and what scientific communities need to do to fully harness its benefits. 

This report, Science in the age of AI (PDF), explores how AI technologies, such as deep learning or large language models, are transforming the nature and methods of scientific inquiry. It also explores how notions of research integrity, research skills, and research ethics are inevitably changing, and what the implications are for the future of science and scientists. 

The report addresses the following questions: 

  • How are AI-driven technologies transforming the methods and nature of scientific research? 
  • What are the opportunities, limitations, and risks of these technologies for scientific research? 
  • How can relevant stakeholders (governments, universities, industry, research funders, etc) best support the development, adoption, and uses of AI-driven technologies in scientific research? 

In answering these questions, the report integrates evidence from a range of sources, including research activities with more than 100 scientists and the advice of an expert Working Group, as well as a taxonomy of AI in science (PDF), a historical review (PDF) on the role of disruptive technologies in transforming science and society, and a patent landscape review (PDF) of artificial-intelligence-related inventions, which are available to download…(More)”

What are location services and how do they work?


Article by Douglas Crawford: “Location services refer to a combination of technologies used in devices like smartphones and computers that use data from your device’s GPS, WiFi, mobile (cellular) network, and sometimes even Bluetooth connections to determine and track your geographic location.
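As a rough illustration of how one of these signals can be turned into a position fix, the toy Python sketch below estimates a location from nearby WiFi access points with known coordinates, weighted by signal strength. Real positioning services use far more sophisticated models and large databases of access-point locations; every number here is invented:

```python
# Toy WiFi positioning: estimate a device's location as a signal-strength-
# weighted centroid of access points with known coordinates. All values
# are invented for illustration.
access_points = [
    # (latitude, longitude, received signal strength in dBm)
    (51.5010, -0.1420, -45),  # strong signal: probably nearby
    (51.5014, -0.1409, -70),
    (51.5003, -0.1431, -80),  # weak signal: probably farther away
]

def estimate_position(aps):
    # Map dBm (a logarithmic scale) to rough linear weights:
    # the stronger the signal, the larger the weight.
    weights = [10 ** (rssi / 20) for _, _, rssi in aps]
    total = sum(weights)
    lat = sum(ap[0] * w for ap, w in zip(aps, weights)) / total
    lon = sum(ap[1] * w for ap, w in zip(aps, weights)) / total
    return lat, lon

print(estimate_position(access_points))  # roughly (51.5010, -0.1420)
```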

This information can be accessed by your operating system (OS) and the apps installed on your device. In many cases, this allows them to perform their purpose correctly or otherwise deliver useful content and features. 

For example, navigation/map, weather, ridesharing (such as Uber or Lyft), and health and fitness tracking apps require location services to perform their functions, while dating, travel, and social media apps can offer additional functionality with access to your device’s location services (such as being able to locate a Tinder match or see recommendations for nearby restaurants).

There’s no doubt location services (and the apps that use them) can be useful. However, the technology can also be (and is) abused by apps to track your movements. The apps then usually sell this information to advertising and analytics companies that combine it with other data to create a profile of you, which they can then use to sell ads. 

Unfortunately, this behavior is not limited to “rogue” apps. Apps usually regarded as legitimate, including almost all Google apps, Facebook, Instagram, and others, routinely send detailed and highly sensitive location details back to their developers by default. And it’s not just apps — operating systems themselves, such as Google’s Android and Microsoft’s Windows, also closely track your movements using location services. 

This makes weighing the undeniable usefulness of location services against the need to maintain a basic level of privacy a tricky balancing act. However, because location services are so easy to abuse, all operating systems include built-in safeguards that give you some control over their use.

In this article, we’ll look at how location services work…(More)”.

Towards a pan-EU Freedom of Information Act? Harmonizing Access to Information in the EU through the internal market competence


Paper by Alberto Alemanno and Sébastien Fassiaux: “This paper examines whether – and on what basis – the EU may harmonise the right of access to information across the Union. It does so by examining the available legal bases established by relevant international obligations, such as those stemming from the Council of Europe, and by EU primary law. It demonstrates that neither the Council of Europe – through the European Convention on Human Rights and the more recent Tromsø Convention – nor the EU – through Article 41 of the EU Charter of Fundamental Rights – requires the EU to enact minimum standards of access to information. That Charter provision, combined with Articles 10 and 11 TEU, requires only the EU institutions – not the EU Member States – to ensure public access to documents, including legislative texts and meeting minutes. Regulation 1049/2001 was adopted on such a legal basis (originally Art. 255 TEC) and should be revised accordingly. The paper demonstrates that the most promising legal basis enabling the EU to proceed towards the harmonisation of access to information within the EU is offered by Article 114 TFEU. It argues that the harmonisation of the conditions governing access to information across Member States would facilitate cross-border activities and trade, thus enhancing the internal market. Moreover, this would ensure equal access to information for all EU citizens and residents, irrespective of their location within the EU. Therefore, the question is not whether but how the EU may – under Article 114 TFEU – act to harmonise access to information. While the EU enjoys wide legislative discretion under Article 114(1) TFEU, this discretion is not absolute but subject to limits derived from fundamental rights and principles such as proportionality, equality, and subsidiarity. Hence the need to design a type of harmonisation capable of preserving existing national FOIAs while enhancing the weakest ones. The only type of harmonisation fit for purpose would therefore be minimal, as opposed to maximal, merely defining the minimum conditions that each Member State’s national legislation governing access to information must meet…(More)”.

The not-so-silent type: Vulnerabilities across keyboard apps reveal keystrokes to network eavesdroppers


Report by Jeffrey Knockel, Mona Wang, and Zoë Reichert: “Typing logographic languages such as Chinese is more difficult than typing alphabetic languages, where each letter can be represented by one key. There is no way to fit the tens of thousands of Chinese characters that exist onto a single keyboard. Despite this obvious challenge, technologies have developed which make typing in Chinese possible. To enable the input of Chinese characters, a writer will generally use a keyboard app with an “Input Method Editor” (IME). IMEs offer a variety of approaches to inputting Chinese characters, including via handwriting, voice, and optical character recognition (OCR). One popular phonetic input method is Zhuyin, and shape- or stroke-based input methods such as Cangjie or Wubi are commonly used as well. However, the most popular way of typing in Chinese, used by nearly 76% of mainland Chinese keyboard users, is the pinyin method, which is based on the pinyin romanization of Chinese characters.
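To make the pinyin method concrete, here is a toy Python sketch of the core lookup an IME performs: the user types romanized syllables and the editor proposes candidate characters. Production IMEs rely on large dictionaries and statistical language models to rank candidates, not a hard-coded table like this invented one:

```python
# Toy pinyin lookup: map romanized input to candidate Chinese characters.
# Real IMEs use large dictionaries plus language models to rank candidates;
# this table is a tiny invented sample.
CANDIDATES = {
    "ni": ["你", "尼", "泥"],
    "hao": ["好", "号", "壕"],
    "nihao": ["你好"],
}

def suggest(pinyin: str) -> list[str]:
    """Return candidate characters for a pinyin string, if known."""
    return CANDIDATES.get(pinyin, [])

print(suggest("nihao"))  # ['你好']
```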

All of the keyboard apps we analyze in this report fall into the category of input method editors (IMEs) that offer pinyin input. These keyboard apps are particularly interesting because they have grown to accommodate the challenge of allowing users to type Chinese characters quickly and easily. While many keyboard apps operate locally, solely within a user’s device, IME-based keyboard apps often have cloud features which enhance their functionality. Because of the complexities of predicting which characters a user may want to type next, especially in logographic languages like Chinese, IMEs often offer “cloud-based” prediction services which reach out over the network. Enabling “cloud-based” features in these apps means that longer strings of syllables that users type will be transmitted to servers elsewhere. As many have previously pointed out, “cloud-based” keyboards and input methods can function as vectors for surveillance and essentially behave as keyloggers. While the content of what users type is traveling from their device to the cloud, it is additionally vulnerable to network attackers if not properly secured. This report is not about how operators of cloud-based IMEs read users’ keystrokes, which is a phenomenon that has already been extensively studied and documented. This report is primarily concerned with the issue of protecting this sensitive data from network eavesdroppers…(More)”.
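The report's distinction matters in practice: TLS protects keystrokes from on-path eavesdroppers, though not from the IME operator itself, who still receives the plaintext. The Python sketch below contrasts the two transports; the endpoint is hypothetical, invented purely for illustration:

```python
# Sketch: the same cloud-prediction request over plaintext HTTP versus HTTPS.
# The endpoint is hypothetical; the point is the transport, not the API.
import requests

typed = {"pinyin": "women mingtian jian"}  # syllables the user has typed so far

# Over plain HTTP, any on-path eavesdropper can read the keystrokes:
# requests.post("http://ime.example.com/candidates", json=typed)

# Over HTTPS, TLS encrypts the payload in transit; an eavesdropper sees
# only ciphertext (the operator, of course, still receives the plaintext):
resp = requests.post("https://ime.example.com/candidates", json=typed)
print(resp.status_code)
```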

The Simple Macroeconomics of AI


Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated by what fraction of tasks are impacted and average task-level cost savings. Using existing estimates on exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years are even more modest and are predicted to be less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of new tasks that may have negative social value…(More)”.
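The Hulten-style calculation at the heart of the paper is simple enough to work through by hand: the aggregate TFP gain is roughly the GDP share of affected tasks times the average task-level cost saving, scaled by adoption. The Python sketch below uses illustrative inputs, not the paper's exact figures, but lands in the same sub-1% range:

```python
# Back-of-the-envelope Hulten-style estimate of AI's TFP effect.
# All three inputs are illustrative assumptions, not the paper's figures.
share_of_tasks_affected = 0.20  # assumed GDP share of tasks exposed to AI
adoption_rate = 0.25            # assumed fraction of exposed tasks adopting AI
avg_task_cost_savings = 0.15    # assumed average cost saving on adopted tasks

tfp_gain = share_of_tasks_affected * adoption_rate * avg_task_cost_savings
print(f"Implied aggregate TFP gain: {tfp_gain:.2%}")  # 0.75%
```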

A New National Purpose: Harnessing Data for Health


Report by the Tony Blair Institute: “We are at a pivotal moment where the convergence of large health and biomedical data sets, artificial intelligence and advances in biotechnology is set to revolutionise health care, drive economic growth and improve the lives of citizens. And the UK has strengths in all three areas. The immense potential of the UK’s health-data assets, from the NHS to biobanks and genomics initiatives, can unlock new diagnostics and treatments, deliver better and more personalised care, prevent disease and ultimately help people live longer, healthier lives.

However, realising this potential is not without its challenges. The complex and fragmented nature of the current health-data landscape, coupled with legitimate concerns around privacy and public trust, has made for slow progress. The UK has had a tendency to provide short-term funding across multiple initiatives, which has led to an array of individual projects – many of which have struggled to achieve long-term sustainability and deliver tangible benefits to patients.

To overcome these challenges, it will be necessary to be bold and imaginative. We must look for ways to leverage the unique strengths of the NHS, such as its nationwide reach and cradle-to-grave data coverage, to create a health-data ecosystem that is much more than the sum of its many parts. This will require us to think differently about how we collect, manage and utilise health data, and to create new partnerships and models of collaboration that break down traditional silos and barriers. It will mean treating data as a key health resource and managing it accordingly.

One model to do this is the proposed sovereign National Data Trust (NDT) – an endeavour to streamline access to and curation of the UK’s valuable health-data assets…(More)”.

Artificial intelligence, the common good, and the democratic deficit in AI governance


Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects (see below), the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas–access to computational resources, access to high quality data, and access to purposeful modeling–represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges, but more generally advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.