Book by Eleonore Fournier-Tombs: “This book explores gender norms and women’s rights in the age of AI. The author examines how gender dynamics have evolved in the spheres of work, self-image and safety, and education, and how these might be reflected in current challenges in AI development. The book also explores opportunities in AI to address issues facing women, and how we might harness current technological developments for gender equality. Taking a narrative tone, the book is interwoven with stories and a reflection on raising young children during the COVID-19 pandemic. It includes both expert and personal interviews to create a nuanced and multidimensional perspective on the state of women’s rights and what might be done to move forward…(More)”.
Towards a Considered Use of AI Technologies in Government
Report by the Institute on Governance and Think Digital: “… undertook a case-study-based research project in which 24 examples of AI technology projects and governance frameworks across a dozen jurisdictions were scanned. The purpose of this report is to provide policymakers and practitioners in government with an overview of controversial deployments of Artificial Intelligence (AI) technologies in the public sector, and to highlight some of the approaches being taken to govern the responsible use of these technologies in government.
Two environmental scans make up the majority of the report. The first scan presents relevant use cases of public sector applications of AI technologies and automation, with special attention given to controversial projects and program/policy failures. The second scan surveys existing governance frameworks employed by international organizations and governments around the world. Each scan is then analyzed to determine common themes across use cases and governance frameworks, respectively. The final section of the report provides risk considerations related to the use of AI by public sector institutions across use cases…(More)”.
How ChatGPT and other AI tools could disrupt scientific publishing
Article by Gemma Conroy: “When radiologist Domenico Mastrodicasa finds himself stuck while writing a research paper, he turns to ChatGPT, the chatbot that produces fluent responses to almost any query in seconds. “I use it as a sounding board,” says Mastrodicasa, who is based at the University of Washington School of Medicine in Seattle. “I can produce a publication-ready manuscript much faster.”
Mastrodicasa is one of many researchers experimenting with generative artificial-intelligence (AI) tools to write text or code. He pays for ChatGPT Plus, the subscription version of the bot based on the large language model (LLM) GPT-4, and uses it a few times a week. He finds it particularly useful for suggesting clearer ways to convey his ideas. Although a Nature survey suggests that scientists who use LLMs regularly are still in the minority, many expect that generative AI tools will become regular assistants for writing manuscripts, peer-review reports and grant applications.
Those are just some of the ways in which AI could transform scientific communication and publishing. Science publishers are already experimenting with generative AI in scientific search tools and for editing and quickly summarizing papers. Many researchers think that non-native English speakers could benefit most from these tools. Some see generative AI as a way for scientists to rethink how they interrogate and summarize experimental results altogether — they could use LLMs to do much of this work, meaning less time writing papers and more time doing experiments…(More)”.
The growing energy footprint of artificial intelligence
Paper by Alex de Vries: “Throughout 2022 and 2023, artificial intelligence (AI) has witnessed a period of rapid expansion and extensive, large-scale application. Prominent tech companies such as Alphabet and Microsoft significantly increased their support for AI in 2023, influenced by the successful launch of OpenAI’s ChatGPT, a conversational generative AI chatbot that reached 100 million users in an unprecedented 2 months. In response, Microsoft and Alphabet introduced their own chatbots, Bing Chat and Bard, respectively.
This accelerated development raises concerns about the electricity consumption and potential environmental impact of AI and data centers. In recent years, data center electricity consumption has accounted for a relatively stable 1% of global electricity use, excluding cryptocurrency mining. Between 2010 and 2018, global data center electricity consumption may have increased by only 6%.
There is increasing apprehension that the computational resources necessary to develop and maintain AI models and applications could cause a surge in data centers’ contribution to global electricity consumption.
This commentary explores initial research on AI electricity consumption and assesses the potential implications of widespread AI technology adoption on global data center electricity use. The piece discusses both pessimistic and optimistic scenarios and concludes with a cautionary note against embracing either extreme…(More)”.
Google’s Expanded ‘Flood Hub’ Uses AI to Help Us Adapt to Extreme Weather
Article by Jeff Young: “Google announced Tuesday that a tool using artificial intelligence to better predict river floods will be expanded to the U.S. and Canada, covering more than 800 North American riverside communities that are home to more than 12 million people. Google calls it Flood Hub, and it’s the latest example of how AI is being used to help adapt to extreme weather events associated with climate change.
“We see tremendous opportunity for AI to solve some of the world’s biggest challenges, and climate change is very much one of those,” Google’s Chief Sustainability Officer, Kate Brandt, told Newsweek in an interview.
At an event in Brussels on Tuesday, Google announced a suite of new and expanded sustainability initiatives and products. Many of them involve the use of AI, such as tools to help city planners find the best places to plant trees and modify rooftops to buffer against city heat, and a partnership with the U.S. Forest Service to use AI to improve maps related to wildfires.

Brandt said Flood Hub’s engineers use advanced AI, publicly available data sources and satellite imagery, combined with hydrologic models of river flows. The results allow flooding predictions with a longer lead time than was previously available in many instances…(More)”.
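As a very rough illustration of the forecasting idea described above (train a model on past observations so it can predict river conditions several days ahead), here is a toy sketch on synthetic data. It is not Google’s Flood Hub, which combines AI with hydrologic models, public data sources and satellite imagery; every variable below is invented for illustration.

```python
# Toy river-stage forecaster on synthetic data. This is NOT Flood Hub;
# it only illustrates "forecasting with a longer lead time" in miniature.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)

# Invented daily history: rainfall (mm) and a river gauge level (m),
# where the gauge loosely tracks a smoothed window of recent rainfall.
days = 1000
rain = rng.gamma(shape=2.0, scale=5.0, size=days)
gauge = 1.0 + 0.04 * np.convolve(rain, np.ones(5) / 5, mode="same")

# Use conditions at day t to predict the gauge at day t + lead,
# i.e. a forecast that gives `lead` days of advance warning.
lead = 3
X = np.column_stack([rain[:-lead], gauge[:-lead]])
y = gauge[lead:]

model = GradientBoostingRegressor().fit(X, y)
print(f"{lead}-day-ahead gauge forecast: {model.predict(X[-1:])[0]:.2f} m")
```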
Generative AI, Jobs, and Policy Response
Paper by the Global Partnership on AI: “Generative AI and the Future of Work remains notably absent from the global AI governance dialogue. Given the transformative potential of this technology in the workplace, this oversight suggests a significant gap, especially considering the substantial implications this technology has for workers, economies and society at large. As interest grows in the effects of Generative AI on occupations, debates centre on roles being replaced or enhanced by technology. Yet there is an unknown quantity, the “Big Unknown”: a significant number of workers whose future depends on decisions yet to be made.
In this brief, recent articles about the topic are surveyed with special attention to the “Big Unknown”. It is not a marginal number: nearly 9% of the workforce, or 281 million workers worldwide, are in this category. Unlike previous AI developments, which focused on automating narrow tasks, Generative AI models possess the scope, versatility, and economic viability to impact jobs across multiple industries and at varying skill levels. Their ability to produce human-like outputs in areas like language, content creation and customer interaction, combined with rapid advancement and low deployment costs, suggests potential near-term impacts that are much broader and more abrupt than prior waves of AI. Governments, companies, and social partners should aim to minimize any potential negative effects from Generative AI technology in the world of work, as well as harness potential opportunities to support productivity growth and decent work. This brief presents concrete policy recommendations at the global and local level. These insights aim to guide the discourse towards a balanced and fair integration of Generative AI in our professional landscape. To navigate this uncertain landscape and ensure that the benefits of Generative AI are equitably distributed, we recommend 10 policy actions that could serve as a starting point for discussion and implementation…(More)”.
AI chatbots do work of civil servants in productivity trial
Article by Paul Seddon: “Documents disclosed to the BBC have shed light on the use of AI-powered chatbot technology within government.
The chatbots have been used to analyse lengthy reports – a job that would normally be done by humans.
The Department for Education, which ran the trial, hopes it could boost productivity across Whitehall.
The PCS civil service union says it does not object to the use of AI – but clear guidelines are needed “so the benefits are shared by workers”.
The latest generation of chatbots, powered by artificial intelligence (AI), can quickly analyse reams of information, including images, to answer questions and summarise long articles.
They are expected to upend working practices across the economy in the coming years, and the government says they will have “significant implications” for the way officials work in future.
The education department ran the eight-week study over the summer under a contract with London-based company Faculty.ai, to test how so-called large language models (LLMs) could be used by officials.
The firm’s researchers used its access to a premium version of ChatGPT, the popular chatbot developed by OpenAI, to analyse draft local skills training plans that had been sent to the department to review.
These plans, drawn up by bodies representing local employers, are meant to influence the training offered by local further education colleges.
Results from the pilot are yet to be published, but documents and emails requested by the BBC under Freedom of Information laws offer an insight into the project’s aims.
According to an internal document setting out the reasons for the study, a chatbot would be used to summarise and compare the “main insights and themes” from the training plans.
The results, which were to be compared with summaries produced by civil servants, would test how Civil Service “productivity” might be improved.
It added that language models could analyse long, unstructured documents “where previously the only other option would be for individuals to read through all the reports”.
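For a sense of what such a summarisation step can look like in code, here is a minimal sketch, assuming the OpenAI Python SDK and an invented prompt; the disclosed documents do not include Faculty.ai’s actual prompts or pipeline.

```python
# Minimal sketch of LLM document summarisation (assumed setup: the OpenAI
# Python SDK with OPENAI_API_KEY set in the environment). Illustrative
# only; this is not the Faculty.ai pipeline, which has not been published.
from openai import OpenAI

client = OpenAI()

def summarise_plan(plan_text: str) -> str:
    """Ask the model for the main insights and themes of one training plan."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for the "premium version of ChatGPT" used
        messages=[
            {"role": "system",
             "content": "You summarise long policy documents for analysts."},
            {"role": "user",
             "content": ("Summarise the main insights and themes of this "
                         "local skills training plan as five bullet points:\n\n"
                         + plan_text)},
        ],
    )
    return response.choices[0].message.content
```

In the pilot’s terms, the output of summarise_plan would then be set against a civil servant’s summary of the same plan to judge whether quality holds as reading time falls.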
But the project’s aims went further, with hopes the chatbot could help provide “useful insights” that could help the department’s skills unit “identify future skills needs across the country”…(More)”.
The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice
Paper by Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang: “Despite the growing consensus that stakeholders affected by AI systems should participate in their design, enormous variation and implicit disagreements exist among current approaches. For researchers and practitioners who are interested in taking a participatory approach to AI design and development, it remains challenging to assess the extent to which any participatory approach grants substantive agency to stakeholders. This article thus aims to ground what we dub the “participatory turn” in AI design by synthesizing existing theoretical literature on participation and through empirical investigation and critique of its current practices. Specifically, we derive a conceptual framework through synthesis of literature across technology design, political theory, and the social sciences that researchers and practitioners can leverage to evaluate approaches to participation in AI design. Additionally, we articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners. We use these empirical findings to understand the current state of participatory practice and subsequently provide guidance to better align participatory goals and methods in a way that accounts for practical constraints…(More)”.
Data Dysphoria: The Governance Challenge Posed by Large Learning Models
Paper by Susan Ariel Aaronson: “Only 8 months have passed since ChatGPT and the large learning model underpinning it took the world by storm. This article focuses on the data supply chain (the data collected and then utilized to train large language models) and the governance challenge it presents to policymakers. These challenges include:
• How web scraping may affect individuals and firms which hold copyrights.
• How web scraping may affect individuals and groups who are supposed to be protected under privacy and personal data protection laws.
• How web scraping revealed the lack of protections for content creators and content providers on open-access websites.
• How the debate over open- versus closed-source LLMs reveals the lack of clear and universal rules to ensure the quality and validity of datasets. As the US National Institute of Standards and Technology explained, many LLMs depend on “large-scale datasets, which can lead to data quality and validity concerns. The difficulty of finding the ‘right’ data may lead AI actors to select datasets based more on accessibility and availability than on suitability… Such decisions could contribute to an environment where the data used in processes is not fully representative of the populations or phenomena that are being modeled, introducing downstream risks”; in short, problems of quality and validity…(More)”.
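To make the first three challenges above concrete, here is a hedged sketch of the mechanism at issue: a scraper that harvests page text for a training corpus, with robots.txt as the only, purely voluntary, opt-out signal. The code is illustrative and not drawn from the paper.

```python
# Illustrative corpus scraper (not from the paper). robots.txt is the only
# opt-out consulted, and honouring it is voluntary, which is precisely the
# protection gap for content creators that the bullets above describe.
import urllib.robotparser
from urllib.parse import urlsplit

import requests
from bs4 import BeautifulSoup

def fetch_if_allowed(url: str, agent: str = "example-corpus-bot") -> str | None:
    """Return the page's text if robots.txt permits this agent, else None."""
    parts = urlsplit(url)
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    robots.read()
    if not robots.can_fetch(agent, url):
        return None  # the site opted out; nothing but convention enforces this
    html = requests.get(url, headers={"User-Agent": agent}, timeout=10).text
    # Strip markup; any copyrighted or personal text passes through untouched.
    return BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
```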
International Definitions of Artificial Intelligence
Report by IAPP: “Computer scientist John McCarthy coined the term artificial intelligence in 1955, defining it as “the science and engineering of making intelligent machines.” He organized the Dartmouth Summer Research Project on Artificial Intelligence a year later — an event that many consider the birthplace of the field.
In today’s world, the definition of AI has been in continuous evolution, its contours and constraints changing to align with current and perhaps future technological progress and cultural contexts. In fact, most papers and articles are quick to point out the lack of consensus around the definition of AI. As a resource from British research organization the Ada Lovelace Institute states, “We recognise that the terminology in this area is contested. This is a fast-moving topic, and we expect that terminology will evolve quickly.” The difficulty in defining AI is illustrated by what AI historian Pamela McCorduck called the “odd paradox,” referring to the idea that, as computer scientists find new and innovative solutions, computational techniques once considered AI lose the title as they become common and repetitive.
The indeterminate nature of the term poses particular challenges in the regulatory space. Indeed, in 2017 a New York City Council task force downgraded its mission to regulate the city’s use of automated decision-making systems to just defining the types of systems subject to regulation because it could not agree on a workable, legal definition of AI.
With this understanding, the following chart provides a snapshot of some of the definitions of AI from various global and sectoral (government, civil society and industry) perspectives. The chart is not an exhaustive list. It allows for cross-contextual comparisons from key players in the AI ecosystem…(More)”.