AI Chatbot Credited With Preventing Suicide. Should It Be?


Article by Samantha Cole: “A recent Stanford study lauds AI companion app Replika for ‘halting suicidal ideation’ for several people who said they felt suicidal. But the study glosses over years of reporting that Replika has also been blamed for throwing users into mental health crises, to the point that its community of users needed to share suicide prevention resources with each other.

The researchers sent a survey of 13 open-response questions to 1,006 Replika users, all of them students aged 18 or older who had been using the app for at least one month. The survey asked about their lives, their beliefs about Replika and their connections to the chatbot, and how they felt about what Replika does for them. Participants were recruited “randomly via email from a list of app users,” according to the study. On Reddit, a Replika user posted a notice they received directly from Replika itself, with an invitation to take part in “an amazing study about humans and artificial intelligence.”

Almost all of the participants reported being lonely, and nearly half were severely lonely. “It is not clear whether this increased loneliness was the cause of their initial interest in Replika,” the researchers wrote. 

The surveys revealed that 30 people credited Replika with saving them from acting on suicidal ideation: “Thirty participants, without solicitation, stated that Replika stopped them from attempting suicide,” the paper said. One participant wrote in their survey: “My Replika has almost certainly on at least one if not more occasions been solely responsible for me not taking my own life.” …(More)”.

Science in the age of AI


Report by the Royal Society: “The unprecedented speed and scale of progress with artificial intelligence (AI) in recent years suggests society may be living through an inflection point. With the growing availability of large datasets, new algorithmic techniques and increased computing power, AI is becoming an established tool used by researchers across scientific fields who seek novel solutions to age-old problems. Now more than ever, we need to understand the extent of the transformative impact of AI on science and what scientific communities need to do to fully harness its benefits. 

This report, Science in the age of AI (PDF), explores how AI technologies, such as deep learning or large language models, are transforming the nature and methods of scientific inquiry. It also explores how notions of research integrity, research skills, and research ethics are inevitably changing, and what the implications are for the future of science and scientists. 

The report addresses the following questions: 

  • How are AI-driven technologies transforming the methods and nature of scientific research? 
  • What are the opportunities, limitations, and risks of these technologies for scientific research? 
  • How can relevant stakeholders (governments, universities, industry, research funders, etc.) best support the development, adoption, and uses of AI-driven technologies in scientific research? 

In answering these questions, the report integrates evidence from a range of sources, including research activities with more than 100 scientists and the advice of an expert Working Group, as well as a taxonomy of AI in science (PDF), a historical review (PDF) of the role of disruptive technologies in transforming science and society, and a patent landscape review (PDF) of artificial intelligence-related inventions, all of which are available to download…(More)”

The Simple Macroeconomics of AI


Paper by Daron Acemoglu: “This paper evaluates claims about large macroeconomic implications of new advances in AI. It starts from a task-based model of AI’s effects, working through automation and task complementarities. So long as AI’s microeconomic effects are driven by cost savings/productivity improvements at the task level, its macroeconomic consequences will be given by a version of Hulten’s theorem: GDP and aggregate productivity gains can be estimated from the fraction of tasks that are impacted and the average task-level cost savings. Using existing estimates of exposure to AI and productivity improvements at the task level, these macroeconomic effects appear nontrivial but modest—no more than a 0.66% increase in total factor productivity (TFP) over 10 years. The paper then argues that even these estimates could be exaggerated, because early evidence is from easy-to-learn tasks, whereas some of the future effects will come from hard-to-learn tasks, where there are many context-dependent factors affecting decision-making and no objective outcome measures from which to learn successful performance. Consequently, predicted TFP gains over the next 10 years become even more modest, at less than 0.53%. I also explore AI’s wage and inequality effects. I show theoretically that even when AI improves the productivity of low-skill workers in certain tasks (without creating new tasks for them), this may increase rather than reduce inequality. Empirically, I find that AI advances are unlikely to increase inequality as much as previous automation technologies because their impact is more equally distributed across demographic groups, but there is also no evidence that AI will reduce labor income inequality. Instead, AI is predicted to widen the gap between capital and labor income. Finally, some of the new tasks created by AI may have negative social value (such as the design of algorithms for online manipulation), and I discuss how to incorporate the macroeconomic effects of such negative-value tasks…(More)”.
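
The Hulten-style logic lends itself to a back-of-the-envelope calculation. The minimal sketch below uses illustrative placeholder inputs (the exposure, adoption, and cost-savings figures are assumptions chosen only to land near the paper’s headline 0.66% estimate, not numbers taken from the paper itself):

```python
# Hulten-style back-of-envelope for AI's aggregate TFP effect:
# TFP gain ~= (share of tasks exposed to AI)
#           x (fraction of exposed tasks where adoption is cost-effective)
#           x (average task-level cost savings).
# All three inputs below are illustrative assumptions, not the paper's estimates.

exposed_task_share = 0.20    # hypothetical: ~20% of tasks exposed to AI
cost_effective_share = 0.23  # hypothetical: ~23% of exposed tasks worth automating
avg_cost_savings = 0.144     # hypothetical: ~14.4% average cost savings per task

tfp_gain = exposed_task_share * cost_effective_share * avg_cost_savings
print(f"Implied 10-year TFP gain: {tfp_gain:.2%}")  # ~0.66%
```

The point of the exercise is that the headline number is simply a product of three small fractions, so any downward revision to an input (for instance, treating hard-to-learn tasks as yielding lower savings) shrinks the aggregate estimate further.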

Artificial intelligence, the common good, and the democratic deficit in AI governance


Paper by Mark Coeckelbergh: “There is a broad consensus that artificial intelligence should contribute to the common good, but it is not clear what is meant by that. This paper discusses this issue and uses it as a lens for analysing what it calls the “democracy deficit” in current AI governance, which includes a tendency to deny the inherently political character of the issue and to take a technocratic shortcut. It indicates what we may agree on and what is and should be up to (further) deliberation when it comes to AI ethics and AI governance. Inspired by the republican tradition in political theory, it also argues for a more active role of citizens and (end-)users: not only as participants in deliberation but also in ensuring, creatively and communicatively, that AI contributes to the common good…(More)”.

Toward a Polycentric or Distributed Approach to Artificial Intelligence & Science


Article by Stefaan Verhulst: “Even as enthusiasm grows over the potential of artificial intelligence (AI), concerns have arisen in equal measure about a possible domination of the field by Big Tech. Such an outcome would replicate many of the mistakes of preceding decades, when a handful of companies accumulated unprecedented market power and often acted as de facto regulators in the global digital ecosystem. In response, the European Group of Chief Scientific Advisors has recently proposed establishing a “state-of-the-art facility for academic research,” to be called the European Distributed Institute for AI in Science (EDIRAS). According to the Group, the facility would be modeled on Geneva’s high-energy physics lab, CERN, with the goal of creating a “CERN for AI” to counterbalance the growing AI prowess of the US and China. 

While the comparison to CERN is flawed in some respects (see below), the overall emphasis on a distributed, decentralized approach to AI is highly commendable. In what follows, we outline three key areas where such an approach can help advance the field. These areas (access to computational resources, access to high-quality data, and access to purposeful modeling) represent three current pain points (“friction”) in the AI ecosystem. Addressing them through a distributed approach can not only help address the immediate challenges but also, more generally, advance the cause of open science and ensure that AI and data serve the broader public interest…(More)”.

AI-enabled Peacekeeping Tech for the Digital Age


Springwise: “There are countless organisations and government agencies working to resolve conflicts around the globe, but they often lack the tools to know if they are making the right decisions. Project Didi is developing those technological tools – helping peacemakers plan appropriately and understand the impact of their actions in real time.

Project Didi Co-founder and CCO Gabe Freund explained to Springwise that the project uses machine learning, big data, and AI to analyse conflicts and “establish a new standard for best practice when it comes to decision-making in the world of peacebuilding.”

In essence, the company is attempting to analyse the many factors involved in a conflict in order to identify a ‘ripe moment’ when both parties will be willing to negotiate for peace. The tools can track the impact of all actors across a conflict, allowing users to identify and create connections between organisations and people doing similar work, amplifying their effects…(More)” See also: Project Didi (Kluz Prize)

Defining AI incidents and related terms


OECD Report: “As AI use grows, so do its benefits and risks. These risks can lead to actual harms (“AI incidents”) or potential dangers (“AI hazards”). Clear definitions are essential for managing and preventing these risks. This report proposes definitions for AI incidents and related terms. These definitions aim to foster international interoperability while providing flexibility for jurisdictions to determine the scope of AI incidents and hazards they wish to address…(More)”.
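
As a rough illustration of the incident/hazard distinction (this is not the OECD’s schema; the type names, fields, and decision rule below are hypothetical), a reporting system might encode it roughly as follows:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class EventKind(Enum):
    AI_INCIDENT = auto()  # event where an AI system's development or use led to actual harm
    AI_HAZARD = auto()    # event that could plausibly lead to such harm, but has not (yet)

@dataclass
class AIEvent:
    description: str
    harm_occurred: bool   # did harm actually materialize?
    harm_plausible: bool  # could harm plausibly result?

    def classify(self) -> Optional[EventKind]:
        # Hypothetical decision rule mirroring the incident/hazard split
        if self.harm_occurred:
            return EventKind.AI_INCIDENT
        if self.harm_plausible:
            return EventKind.AI_HAZARD
        return None  # outside the scope of incident reporting

# A near-miss is recorded as a hazard rather than an incident:
near_miss = AIEvent(
    description="Model produced harmful output that was caught before release",
    harm_occurred=False,
    harm_plausible=True,
)
print(near_miss.classify())  # EventKind.AI_HAZARD
```

The value of shared definitions is precisely that different jurisdictions’ reporting systems could interoperate on a split like this while still drawing the scope line (what counts as harm) where each chooses.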

Building a trauma-informed algorithmic assessment toolkit


Report by Suvradip Maitra, Lyndal Sleep, Suzanna Fay, Paul Henman: “Artificial intelligence (AI) and automated processes hold considerable promise to enhance human wellbeing by fully automating or co-producing services with human service providers. Concurrently, if not well considered, automation also provides ways to generate harms at scale and speed. To address this challenge, much discussion to date has focused on principles of ethical AI and accountable algorithms, with a groundswell of early work seeking to translate these into practical frameworks and processes to ensure such principles are enacted. AI risk assessment frameworks to detect and evaluate possible harms are one dominant approach, as is a growing body of AI audit frameworks, with concomitant emerging governmental and organisational regulatory settings and associated professionals.

The research outlined in this report took a different approach. Building on social services work on trauma-informed practice, the researchers identified key principles and a practical framework that frame AI design, development and deployment as a reflective, constructive exercise, so that the resulting algorithmically supported services are cognisant and inclusive of the diversity of human experience, particularly of those who have experienced trauma. The study produced a practical, co-designed and piloted Trauma Informed Algorithmic Assessment Toolkit.

This Toolkit has been designed to assist organisations in their use of automation in service delivery at any stage of their automation journey: ideation, design, development, piloting, deployment, or evaluation. While of particular use for social service organisations working with people who may have experienced past trauma, the Toolkit will be beneficial for any organisation wanting to ensure safe, responsible and ethical use of automation and AI…(More)”.

AI for social good: Improving lives and protecting the planet


McKinsey Report: “…Challenges in scaling AI for social-good initiatives are persistent and tough. Seventy-two percent of the respondents to our expert survey observed that most efforts to deploy AI for social good to date have focused on research and innovation rather than adoption and scaling. Fifty-five percent of grants for AI research and deployment across the SDGs are $250,000 or smaller, which is consistent with a focus on targeted research or smaller-scale deployment, rather than large-scale expansion. Aside from funding, the biggest barriers to scaling AI continue to be data availability, accessibility, and quality; AI talent availability and accessibility; organizational receptiveness; and change management. More on these topics can be found in the full report.

While overcoming these challenges, organizations should also be aware of strategies to address the range of risks, including inaccurate outputs, biases embedded in the underlying training data, the potential for large-scale misinformation, and malicious influence on politics and personal well-being. As we have noted in multiple recent articles, AI tools and techniques can be misused, even if the tools were originally designed for social good. Experts identified the top risks as impaired fairness, malicious use, and privacy and security concerns, followed by explainability (Exhibit 2). Respondents from not-for-profits expressed relatively more concern about misinformation, talent issues such as job displacement, and effects of AI on economic stability compared with their counterparts at for-profits, who were more often concerned with IP infringement…(More)”

Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing)


Book by Salman Khan: “…explores how artificial intelligence and GPT technology will transform learning, and offers a road map for teachers, parents, and students to navigate this exciting (and sometimes intimidating) new world.

A pioneer in the field of education technology, Khan examines the ins and outs of these cutting-edge tools and how they will revolutionize the way we learn and teach. For parents concerned about their children’s success, Khan illustrates how AI can personalize learning by adapting to each student’s individual pace and style, identifying strengths and areas for improvement, and offering tailored support and feedback to complement traditional classroom instruction. Khan emphasizes that embracing AI in education is not about replacing human interaction but about enhancing it with customized and accessible learning tools that encourage creative problem-solving skills and prepare students for an increasingly digital world.

But Brave New Words is not just about technology—it’s about what this technology means for our society, and the practical implications for administrators, guidance counselors, and hiring managers who can harness the power of AI in education and the workplace. Khan also delves into the ethical and social implications of AI and large language models, offering thoughtful insights into how we can use these tools to build a more accessible education system for students around the world…(More)”.