Generative AI and Democracy: Impacts and Interventions


Report by Demos (UK): “This week’s election announcement has set all political parties firmly into campaign mode and over the next 40 days the public will be weighing up who will get their vote on 4th July.

This democratic moment, however, will take place against the backdrop of a new and largely untested threat: generative AI. In the lead-up to the election, the strength of our electoral integrity is likely to be tested by the spread of AI-generated content and deepfakes – an issue that over 60% of the public are concerned about, according to recent Demos and Full Fact polling.

Our new paper takes a look at the near- and long-term solutions at our disposal for bolstering the resilience of our democratic institutions in the modern technological age. We explore the four most pressing mechanisms by which generative AI challenges the stability of democracy, and how to mitigate them…(More)”.

AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for “halting suicidal ideation” for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1006 Replika users who were 18 years or older and students, and who’d been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

Science in the age of AI


Report by the Royal Society: “The unprecedented speed and scale of progress with artificial intelligence (AI) in recent years suggests society may be living through an inflection point. With the growing availability of large datasets, new algorithmic techniques and increased computing power, AI is becoming an established tool used by researchers across scientific fields who seek novel solutions to age-old problems. Now more than ever, we need to understand the extent of the transformative impact of AI on science and what scientific communities need to do to fully harness its benefits. 

This report, Science in the age of AI (PDF), explores how AI technologies, such as deep learning or large language models, are transforming the nature and methods of scientific inquiry. It also explores how notions of research integrity, research skills, and research ethics are inevitably changing, and what the implications are for the future of science and scientists. 

The report addresses the following questions: 

  • How are AI-driven technologies transforming the methods and nature of scientific research? 
  • What are the opportunities, limitations, and risks of these technologies for scientific research? 
  • How can relevant stakeholders (governments, universities, industry, research funders, etc) best support the development, adoption, and uses of AI-driven technologies in scientific research? 

In answering these questions, the report integrates evidence from a range of sources, including research activities with more than 100 scientists and the advice of an expert Working Group, as well as a taxonomy of AI in science (PDF), a historical review (PDF) of the role of disruptive technologies in transforming science and society, and a patent landscape review (PDF) of artificial intelligence-related inventions, all of which are available to download…(More)”

What are location services and how do they work?


Article by Douglas Crawford: “Location services refer to a combination of technologies used in devices like smartphones and computers that use data from your device’s GPS, WiFi, mobile (cellular networks), and sometimes even Bluetooth connections to determine and track your geographic location.

This information can be accessed by your operating system (OS) and the apps installed on your device. In many cases, this allows them to perform their purpose correctly or otherwise deliver useful content and features. 

For example, navigation/map, weather, ridesharing (such as Uber or Lyft), and health and fitness tracking apps require location services to perform their functions, while dating, travel, and social media apps can offer additional functionality with access to your device’s location services (such as being able to locate a Tinder match or see recommendations for nearby restaurants).
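The way a device combines these signals can be sketched in miniature. The snippet below is a purely illustrative Python sketch, not how any real OS works: actual location services match observed WiFi access points against large databases of known AP positions and fuse GPS and cell-tower data with far more sophisticated algorithms. Here a position is simply estimated as a signal-strength-weighted centroid of nearby access points with known (invented) coordinates:

```python
# Illustrative sketch of one ingredient of WiFi-based positioning:
# a signal-strength-weighted centroid of access points with known
# coordinates. All coordinates and RSSI values below are invented.

def estimate_position(access_points):
    """access_points: list of (lat, lon, rssi_dbm) tuples.

    A stronger signal (less negative RSSI) suggests the AP is closer,
    so it gets a larger weight in the centroid.
    """
    # Convert RSSI (e.g. -45 dBm strong, -85 dBm weak) to a positive weight.
    weights = [10 ** (rssi / 20.0) for _, _, rssi in access_points]
    total = sum(weights)
    lat = sum(w * ap[0] for w, ap in zip(weights, access_points)) / total
    lon = sum(w * ap[1] for w, ap in zip(weights, access_points)) / total
    return lat, lon

# Three nearby APs; the strongest signal pulls the estimate toward it.
aps = [
    (51.5010, -0.1420, -45),  # strong signal
    (51.5030, -0.1400, -70),
    (51.5050, -0.1380, -85),
]
lat, lon = estimate_position(aps)
print(f"estimated position: {lat:.4f}, {lon:.4f}")
```

The weighting function is a toy choice; the point is only that a handful of ambient radio observations, none of them a GPS fix, already pin a device down to within a city block.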

There’s no doubt location services (and the apps that use them) can be useful. However, the technology can also be (and is) abused by apps to track your movements. The apps then usually sell this information to advertising and analytics companies that combine it with other data to create a profile of you, which they can then use to sell ads. 

Unfortunately, this behavior is not limited to “rogue” apps. Apps usually regarded as legitimate, including almost all Google apps, Facebook, Instagram, and others, routinely send detailed and highly sensitive location details back to their developers by default. And it’s not just apps — operating systems themselves, such as Google’s Android and Microsoft Windows, also closely track your movements using location services. 

This makes weighing the undeniable usefulness of location services against the need to maintain a basic level of privacy a tricky balancing act. However, because location services are so easy to abuse, all operating systems include built-in safeguards that give you some control over their use.

In this article, we’ll look at how location services work…(More)”.

Towards a pan-EU Freedom of Information Act? Harmonizing Access to Information in the EU through the internal market competence


Paper by Alberto Alemanno and Sébastien Fassiaux: “This paper examines whether – and on what basis – the EU may harmonise the right of access to information across the Union. It does so by examining the available legal bases established by relevant international obligations, such as those stemming from the Council of Europe, and by EU primary law. It demonstrates that neither the Council of Europe – through the European Convention on Human Rights and the more recent Tromsø Convention – nor the EU – through Article 41 of the EU Charter of Fundamental Rights – requires the EU to enact minimum standards of access to information. That Charter provision, combined with Articles 10 and 11 TEU, requires only the EU institutions – not the EU Member States – to ensure public access to documents, including legislative texts and meeting minutes. Regulation 1049/2001 was adopted on such a legal basis (originally Art. 255 TEC) and should be revised accordingly. The paper demonstrates that the most promising legal basis enabling the EU to proceed towards the harmonisation of access to information within the EU is offered by Article 114 TFEU. It argues that harmonising the conditions governing access to information across Member States would facilitate cross-border activities and trade, thus enhancing the internal market. Moreover, this would ensure equal access to information for all EU citizens and residents, irrespective of their location within the EU. Therefore, the question is not whether but how the EU may – under Article 114 TFEU – act to harmonise access to information. While the EU enjoys wide legislative discretion under Article 114(1) TFEU, that discretion is not absolute but subject to limits derived from fundamental rights and principles such as proportionality, equality, and subsidiarity. Hence the need to design a type of harmonisation capable of preserving existing national FOIAs while enhancing the weakest ones. 
The only type of harmonisation fit for purpose would therefore be minimal, as opposed to maximal, merely defining the minimum conditions that each Member State’s national legislation governing access to information must meet…(More)”.

Digital Media and Grassroots Anti-Corruption


Open access book edited by Alice Mattoni: “Delving into a burgeoning field of research, this enlightening book utilises case studies from across the globe to explore how digital media is used at the grassroots level to combat corruption. Bringing together an impressive range of experts, Alice Mattoni deftly assesses the design, creation and use of a wide range of anti-corruption technologies…(More)”.

Private Thought and Public Speech


Essay by David Bromwich: “The past decade has witnessed a notable rise in the deployment of outrageous speech and censorship: opposite tendencies, on the face of things, which actually strengthen each other’s claim. My aim in this essay is to defend the traditional civil libertarian argument against censorship, without defending outrageous speech. By outrageous, I should add, I don’t mean angry or indignant or accusing speech, of the sort its opponents call “extreme” (often because it expresses an opinion shared by a small minority). Spoken words of this sort may give an impetus to thought, and their existence is preferable to anything that could be done to silence them. Outrageous speech, by contrast, is speech that means only to enrage, and not to convey any information or argument, in however primitive a form. No intelligent person wishes there were more of it. But, for the survival of a free society, censorship is far more dangerous.  

Let me try for a closer description of these rival tendencies. On the one hand, there is the unembarrassed publication of the degrading epithet, the intemperate accusation, the outlandish verbal assault against a person thought to be an erring member of one’s own milieu; and on the other hand, the bureaucratized penalizing of inappropriate speech (often classified as such quite recently) which has become common in the academic, media, professional, and corporate workplace. …(More)”.

The not-so-silent type: Vulnerabilities across keyboard apps reveal keystrokes to network eavesdroppers


Report by Jeffrey Knockel, Mona Wang, and Zoë Reichert: “Typing logographic languages such as Chinese is more difficult than typing alphabetic languages, where each letter can be represented by one key. There is no way to fit the tens of thousands of Chinese characters that exist onto a single keyboard. Despite this obvious challenge, technologies have developed which make typing in Chinese possible. To enable the input of Chinese characters, a writer will generally use a keyboard app with an “Input Method Editor” (IME). IMEs offer a variety of approaches to inputting Chinese characters, including via handwriting, voice, and optical character recognition (OCR). One popular phonetic input method is Zhuyin, and shape- or stroke-based input methods such as Cangjie or Wubi are commonly used as well. However, the most popular way of typing in Chinese – used by nearly 76% of mainland Chinese keyboard users – is the pinyin method, which is based on the pinyin romanization of Chinese characters.
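At its core, a pinyin IME maps typed romanized syllables to ranked candidate characters. The toy Python sketch below illustrates the idea; the tiny dictionary and its rankings are invented stand-ins for the large frequency dictionaries and language models real IMEs use:

```python
# Toy sketch of the core of a pinyin IME: mapping pinyin syllables
# to candidate Chinese characters, ordered by (assumed) frequency.
# The dictionary here is a tiny illustrative stand-in.

CANDIDATES = {
    "ni": ["你", "尼", "泥"],
    "hao": ["好", "号", "毫"],
    "zhong": ["中", "种", "重"],
    "wen": ["文", "问", "温"],
}

def lookup(syllable):
    """Return ranked candidate characters for one pinyin syllable."""
    return CANDIDATES.get(syllable, [])

def compose(syllables):
    """Pick the top-ranked candidate for each typed syllable."""
    return "".join(lookup(s)[0] for s in syllables if lookup(s))

print(compose(["ni", "hao"]))      # → 你好
print(compose(["zhong", "wen"]))   # → 中文
```

Disambiguating among candidates is exactly where prediction quality matters, which is why, as the report goes on to explain, vendors move this step into the cloud.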

All of the keyboard apps we analyze in this report fall into the category of input method editors (IMEs) that offer pinyin input. These keyboard apps are particularly interesting because they have grown to accommodate the challenge of allowing users to type Chinese characters quickly and easily. While many keyboard apps operate locally, solely within a user’s device, IME-based keyboard apps often have cloud features which enhance their functionality. Because of the complexities of predicting which characters a user may want to type next, especially in logographic languages like Chinese, IMEs often offer “cloud-based” prediction services which reach out over the network. Enabling “cloud-based” features in these apps means that longer strings of syllables that users type will be transmitted to servers elsewhere. As many have previously pointed out, “cloud-based” keyboards and input methods can function as vectors for surveillance and essentially behave as keyloggers. While the content of what users type is traveling from their device to the cloud, it is additionally vulnerable to network attackers if not properly secured. This report is not about how operators of cloud-based IMEs read users’ keystrokes, which is a phenomenon that has already been extensively studied and documented. This report is primarily concerned with the issue of protecting this sensitive data from network eavesdroppers…(More)”.
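The eavesdropping risk the report describes can be shown in miniature. In the hedged Python sketch below, the request format, endpoint, and hostname are all invented for illustration (real cloud IMEs use their own protocols); the point is simply that a pinyin query transmitted without encryption is readable by anyone observing the network, making the keyboard an inadvertent keylogger:

```python
# Illustration of why an unencrypted cloud-prediction request leaks
# keystrokes: the typed pinyin travels in the clear on the wire.
# The request format and host below are invented for illustration.

def build_plaintext_request(pinyin):
    """An unencrypted cloud-prediction request (illustrative format)."""
    return (f"GET /predict?input={pinyin} HTTP/1.1\r\n"
            f"Host: ime.example\r\n\r\n").encode("utf-8")

def eavesdrop(wire_bytes):
    """What a passive network observer can read from the packet."""
    text = wire_bytes.decode("utf-8", errors="replace")
    # The typed syllables sit in the query string, in the clear.
    start = text.find("input=") + len("input=")
    end = text.find(" ", start)
    return text[start:end]

packet = build_plaintext_request("woxiangni")  # user typed "wo xiang ni"
print("eavesdropper recovered:", eavesdrop(packet))
```

Transport encryption (e.g. TLS) hides the query from the network, though — as the report notes — the IME operator itself still sees every keystroke.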

Digitalization in Practice


Book edited by Jessamy Perriam and Katrine Meldgaard Kjær: “…shows that as welfare is increasingly digitalized, an investigation of the social implications of this digitalization becomes increasingly pertinent. The book offers chapters on how the state operates, from the day-to-day practices of governance to keeping registers of businesses, from overarching and sometimes contradictory policies to considering how to best include citizens in digitalized processes. Moreover, the book takes a citizen perspective on key issues of access, identification and social harm to consider the social implications of digitalization in the everyday. The diversity of topics in Digitalization in Practice reflects how digitalization as an ongoing process and practice fundamentally impacts and often reshapes the relationship between states and citizens.

  • Provides much needed critical perspectives on digital states in practice.
  • Opens up provocative questions for further studies and research topics in digital states.
  • Showcases empirical studies of situations where digital states are enacted…(More)”.

More Questions Than Flags: Reality Check on DSA’s Trusted Flaggers


Article by Ramsha Jahangir, Elodie Vialle and Dylan Moses: “It’s been 100 days since the Digital Services Act (DSA) came into effect, and many of us are still wondering how the Trusted Flagger mechanism is taking shape, particularly for civil society organizations (CSOs) that could be potential applicants.

With an emphasis on accountability and transparency, the DSA requires national coordinators to appoint Trusted Flaggers, who are designated entities whose requests to flag illegal content must be prioritized. “Notices submitted by Trusted Flaggers acting within their designated area of expertise . . . are given priority and are processed and decided upon without undue delay,” according to the DSA. Trusted Flaggers can include non-governmental organizations, industry associations, private or semi-public bodies, and law enforcement agencies. For instance, a private company that focuses on finding CSAM or terrorist-type content, or tracking groups that traffic in that content, could be eligible for Trusted Flagger status under the DSA. To be appointed, entities need to meet certain criteria, including being independent, accurate, and objective.

Trusted escalation channels are a key mechanism for civil society organizations (CSOs) supporting vulnerable users, such as human rights defenders and journalists targeted by online attacks on social media, particularly in electoral contexts. However, existing channels could be much more efficient. The DSA is a unique opportunity to redesign these mechanisms for reporting illegal or harmful content at scale. They need to be rethought for CSOs that hope to become Trusted Flaggers. Platforms often require, for instance, content to be translated into English and context to be understood by English-speaking audiences (due mainly to the fact that the key decision-makers are based in the US), which creates an added burden for CSOs that are resource-strapped. The lack of transparency in the reporting process can be distressing for the victims for whom those CSOs advocate. The lack of timely response can lead to dramatic consequences for human rights defenders and information integrity. Several CSOs we spoke with were not even aware of these escalation channels – and platforms are not incentivized to promote these mechanisms, given their inability to vet, prioritize, and resolve all potential issues sent to them….(More)”.